Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-25 Thread Phillip Hallam-Baker via dev-security-policy
On Fri, Oct 25, 2019 at 4:21 AM James Burton  wrote:

> Extended validation was introduced at a time when nearly everyone browsed
> the internet on low/medium-resolution, large-screen devices that provided
> the room for an extended validation style visual security indicator.
> Everything has moved on and purchases are now made on small-screen devices
> that have no room to support an extended validation style visual security
> indicator. Apple supported an extended validation style visual security
> indicator in the iOS browser and it failed [1] [2].
>
> It's right that we are removing the extended validation style visual
> security indicator from browsers because of a) the above, b) normal users
> don't understand extended validation style visual security indicators, c)
> the inconsistencies of the indicator between browsers, and d) users can't
> tell who is real or not based on extended validation indicators, as company
> names sometimes don't match the actual site name.
>
> [1]  https://www.typewritten.net/writer/ev-phishing
> [2]  https://stripe.ian.sh
>

The original proposal that led to EV was actually to validate company
logos and present them as a logotype.
There was a ballot proposed here to bar any attempt even to experiment with
logotype. This was withdrawn after I pointed out to Mozilla staff that
there was an obvious antitrust concern in using the threat of withdrawing
roots from a browser with 5% market share to suppress deployment of any
feature.

Now, for the record, that is what a threat looks like: we will destroy your
company if you do not comply with our demands. Asking someone to contact
the Mozilla or Google lawyers because they really need to know what one of
their employees is doing is not.

Again, the brief here is to provide security signals that allow the user to
protect themselves.


-- 
Website: http://hallambaker.com/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Oct 24, 2019 at 9:54 PM Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Paul Walsh via dev-security-policy 
> writes:
>
> >we conducted the same research with 85,000 active users over a period of
> >12 months
>
> As I've already pointed out weeks ago when you first raised this, your
> marketing department conducted a survey of EV marketing effectiveness.  If
> you have a refereed, peer-reviewed study published at a conference or in
> an academic journal, please reference it, not a marketing survey
> masquerading as a "study".


There are certainly problems with doing usability research. But right now
there is very little funding for academic studies that are worth reading.

You didn't criticize the paper with 27 subjects split into three groups
from 2007. Nor did you criticize the fact that the conclusions were totally
misrepresented.

So it doesn't appear to be spurious research, or the misrepresentation of
results, that you have a problem with. What you seem to have a problem with
is the conclusions.

At least with 85,000 subjects there is some chance that Paul himself has
found out something of interest. That doesn't mean that we have to accept
his conclusions as correct or incontrovertible, but I think it does mean
that he deserves to be treated with respect.

I am not at all happy with the way this discussion has gone. It seems that,
contrary to the claims of openness, Mozilla has a groupthink problem. For
some reason it is entirely acceptable to attack CAs for any reason and with
the flimsiest of evidence.


Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Oct 24, 2019 at 5:31 PM Paul Walsh  wrote:

> So, the next time a person says “EV is broken” or “website identity can’t
> work” please think about what I just said and imagine actual browser
> designers and developers who were/are responsible for that work. They were
> never given a chance to get it right.
>

The point I wanted to bring to people's attention here is that the world
has moved on since. At the present moment we are engaged in a political
crisis on both sides of the Atlantic. Those are the particular issues on
which I have been focused and those are the issues that I expect will be my
primary concern for a few months longer.

But one way or another, those issues will eventually be resolved. And as
soon as that happens, the blamestorming will begin. And once they have run
out of the guilty, they will be going after the innocent (as of course will
the people who were also guilty, hoping to deflect attention from their own
culpability). And who else is there going to be left to blame who is
within reach apart from 'BigTech'?

The security usability approach of the 1990s doesn't work any more. We
don't need people to tell us what doesn't work, we need people who are
committed to making it work.

The brief here is to provide people with a usable way to be safe on the
Internet. That includes providing them with a means of telling a fake site
from a real one. It also includes the entirely separate problem of how to
prevent phishing-type attacks.


And one of the things we need to start doing is being honest about what the
research actually shows. From the paper cited by Julien:

" The participants who were asked to read the Internet Explorer help file
were more likely to classify both real and fake sites as legitimate
whenever the phishing warning did not appear."

This is actually the exact opposite of the misleading impression he gave of
the research.

The green bar is not enough; I never expected it to be. To be successful,
the green bar required the browser providers to provide a consistent UI
that users could rely on, and to explain what it means. It seems that every
day I am turning on a device or starting an app only to be told it has
updated and they want to tell me about some new feature they have added.
Why is it only the features that the providers want to promote that get
that treatment? Why not also use it to tell people how to be safe?


Re: Firefox removes UI for site identity

2019-10-24 Thread Phillip Hallam-Baker via dev-security-policy
Eric,

I am not going to be gaslighted here.

Just what was your email supposed to do other than "suppressing dialogue
within this community"?

I was making no threat, but if I were still working for a CA, I would
certainly get the impression that you were threatening me.

The bullying and unprofessional behavior of a certain individual is one of
the reasons I have stopped engaging in CABForum, an organization I
co-founded. My contributions to this industry began in 1992 when I began
working on the Web with Tim Berners-Lee at CERN.


The fact that employees who work on the largest browser also participate in
the technical and policy discussions of the third largest browser, which is
also the only multi-party competitor, should be a serious concern to Google
and Mozilla. It is a clear antitrust liability to both concerns. People
here might think that convenient, but it is not the sort of arrangement I,
for one, would like to have to defend in Congressional hearings.

As I said, I do not make threats. My concern here is that we have lost
public confidence. We are no longer the heroes we once were and politicians
in your own party are now running against 'Big Tech'. We already had DoH
raised in the House this week and there is more to come. We have six months
at most to put our house in order.



On Thu, Oct 24, 2019 at 12:29 PM Eric Mill  wrote:

> Phillip, that was an unprofessional contribution to this list, that could
> be read as a legal threat, and could contribute to suppressing dialogue
> within this community. And, given that the employee to which it is clear
> you are referring is not only a respected community member, but literally a
> peer of the Mozilla Root Program, it is particularly unhelpful to Mozilla's
> basic operations.
>
> On Wed, Oct 23, 2019 at 10:33 AM Phillip Hallam-Baker via
> dev-security-policy  wrote:
>
>> On Tue, Oct 22, 2019 at 7:49 PM Matt Palmer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>> > On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via
>> > dev-security-policy wrote:
>> > > I also have a question for Mozilla on the removal of the EV UI.
>> >
>> > This is a mischaracterisation.  The EV UI has not been removed, it has
>> been
>> > moved to a new location.
>> >
>> > > So my question to Mozilla is, why did Mozilla post this as a subject
>> on
>> > > the mozilla.dev.security.policy list if it didn't plan to interact
>> with
>> > > members of the community who took the time to post responses?
>> >
>> > What leads you to believe that Mozilla didn't plan to interact with
>> members
>> > of the community?  It is entirely plausible that if any useful responses
>> > that warranted interaction were made, interaction would have occurred.
>> >
>> > I don't believe that Mozilla is obliged to respond to people who have
>> > nothing useful to contribute, and who don't accurately describe the
>> change
>> > being made.
>> >
>> > > This issue started with a posting by Mozilla on August 12, but despite
>> > 237
>> > > subsequent postings from many members of the Mozilla community, I
>> don't
>> > > think Mozilla staff ever responded to anything or anyone - not to
>> explain
>> > > or justify the decision, not to argue.  Just silence.
>> >
>> > I think the decision was explained and justified in the initial
>> > announcement.  No information that contradicted the provided
>> justification
>> > was presented, so I don't see what argument was required.
>> >
>> > > In the future, if Mozilla has already made up its mind and is not
>> > > interested in hearing back from the community, it might be better NOT
>> to
>> > > start a discussion on the list soliciting feedback.
>> >
>> > Soliciting feedback and hearing back from the community does not require
>> > response from Mozilla, merely reading.  Do you have any evidence that
>> > Mozilla staff did not, in fact, read the feedback that was given?
>> >
>>
>> If you are representing yourselves as having an open process, the lack of
>> response on the list does undermine that claim. The lack of interaction on
>> that particular topic actually speaks volumes.
>>
>> Both parties in Congress have already signalled that they intend to go
>> after 'big tech'. Security is an obvious issue to focus on. While it is
>> unlikely Mozilla will be a target of those discussions, Google certainly
>> is
>> and one employee in particular.
>>
>> This is the point at which the smart people are going to lawyer up.
>> ___
>> dev-security-policy mailing list
>> dev-security-policy@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security-policy
>>
>
>
> --
> Eric Mill
> 617-314-0966 | konklone.com | @konklone <https://twitter.com/konklone>
>


Re: Firefox removes UI for site identity

2019-10-23 Thread Phillip Hallam-Baker via dev-security-policy
On Tue, Oct 22, 2019 at 7:49 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via
> dev-security-policy wrote:
> > I also have a question for Mozilla on the removal of the EV UI.
>
> This is a mischaracterisation.  The EV UI has not been removed, it has been
> moved to a new location.
>
> > So my question to Mozilla is, why did Mozilla post this as a subject on
> > the mozilla.dev.security.policy list if it didn't plan to interact with
> > members of the community who took the time to post responses?
>
> What leads you to believe that Mozilla didn't plan to interact with members
> of the community?  It is entirely plausible that if any useful responses
> that warranted interaction were made, interaction would have occurred.
>
> I don't believe that Mozilla is obliged to respond to people who have
> nothing useful to contribute, and who don't accurately describe the change
> being made.
>
> > This issue started with a posting by Mozilla on August 12, but despite
> 237
> > subsequent postings from many members of the Mozilla community, I don't
> > think Mozilla staff ever responded to anything or anyone - not to explain
> > or justify the decision, not to argue.  Just silence.
>
> I think the decision was explained and justified in the initial
> announcement.  No information that contradicted the provided justification
> was presented, so I don't see what argument was required.
>
> > In the future, if Mozilla has already made up its mind and is not
> > interested in hearing back from the community, it might be better NOT to
> > start a discussion on the list soliciting feedback.
>
> Soliciting feedback and hearing back from the community does not require
> response from Mozilla, merely reading.  Do you have any evidence that
> Mozilla staff did not, in fact, read the feedback that was given?
>

If you are representing yourselves as having an open process, the lack of
response on the list does undermine that claim. The lack of interaction on
that particular topic actually speaks volumes.

Both parties in Congress have already signalled that they intend to go
after 'big tech'. Security is an obvious issue to focus on. While it is
unlikely Mozilla will be a target of those discussions, Google certainly is
and one employee in particular.

This is the point at which the smart people are going to lawyer up.


Re: Logotype extensions

2019-07-19 Thread Phillip Hallam-Baker via dev-security-policy
Like I said, expect to defend this in House and Senate hearings.

This is a restraint of trade. You are using your market power to impede
development of the market.

Mozilla Corp. made no complaint when VeriSign deployed Issuer LogoTypes.


On Tue, Jul 16, 2019 at 8:17 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It seems to me that this discussion has veered away from the original
> question, which was seeking consent to "experiment" with logotypes in
> publicly-trusted certificates. I don't think there is much doubt that RFC
> 3709 has been and can be implemented, and as pointed out, it can be tested
> in private hierarchies. I fail to understand the point of this type of
> "experiment", especially when it leaves all of the difficult questions -
> such as global trademark validation and the potential to mislead users -
> unanswered. The risks of permitting such "experimentation" appear to far
> outweigh the benefits.
>
> The discussion has morphed into a question of a CA's right to encode
> additional information into a publicly-trusted certificate, beyond the
> common profile defined in the BRs, for use in a subset of Browsers or other
> client software. The argument here seems to be that BR 7.1.2.4(b)
> ("semantics that, if included, will mislead a Relying Party about the
> certificate information") can't be triggered if the user agent doesn't
> understand the data, or that there needs to be proof that the data is
> misleading (versus could be misleading) to trigger that clause. This seems
> like a much more difficult problem to solve, and one that doesn't need to
> be addressed in the context of the original question.
>
> Given this, and the fact that I believe it is in everyone's best interest
> to resolve the current ambiguity over Mozilla's policy on logotypes, I
> again propose to add logotype extensions to our Forbidden Practices[1], as
> follows:
>
> ** Logotype Extension **
> Due to the risk of misleading Relying Parties and the lack of defined
> validation standards for information contained in this field, as discussed
> here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> Subscriber certificates.
>
> I will greatly appreciate additional feedback on my analysis and proposal.
>
> - Wayne
>
> [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> [2]
>
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> On Fri, Jul 12, 2019 at 2:26 PM Ryan Sleevi  wrote:
>
> > And they will mislead relying parties. Which is why you cannot use *this*
> > particular extension. Sorry, that ship sailed in 2005.
> >
> > A CA that would be remotely be considering exercising this clause would
> > strongly benefit from checking with the Root stores they’re in, no matter
> > the extension proposed.
> >
> > It’s also Subject Identifying Information.
> >
> > On Fri, Jul 12, 2019 at 5:11 PM Jeremy Rowley <
> jeremy.row...@digicert.com>
> > wrote:
> >
> >> The language of the BRs is pretty permissive.  Assuming Mozilla didn't
> >> update its policy, then issuance would be permitted if the CA can show
> that
> >> the following was false:
> >>
> >> b. semantics that, if included, will mislead a Relying Party about the
> >> certificate information verified by
> >> the CA (such as including extendedKeyUsage value for a smart card, where
> >> the CA is not able to verify
> >> that the corresponding Private Key is confined to such hardware due to
> >> remote issuance).
> >>
> >> I think this is section you are citing as prohibiting issuance correct?
> >> So as long as the CA can show that this is not true, then issuance is
> >> permitted under the current policy.
> >>
> >>
> >>
> >> -Original Message-
> >> From: dev-security-policy <
> dev-security-policy-boun...@lists.mozilla.org>
> >> On Behalf Of Ryan Sleevi via dev-security-policy
> >> Sent: Friday, July 12, 2019 3:01 PM
> >> To: Doug Beattie 
> >> Cc: mozilla-dev-security-policy <
> >> mozilla-dev-security-pol...@lists.mozilla.org>; Wayne Thayer <
> >> wtha...@mozilla.com>
> >> Subject: Re: Logotype extensions
> >>
> >> Alternatively:
> >>
> >> There is zero reason these should be included in publicly trusted certs
> >> used for TLS, and ample harm. It is not necessary nor essential to
> securing
> >> TLS, and that should remain the utmost priority.
> >>
> >> CAs that wish to issue such certificates can do so from alternate
> >> hierarchies. There is zero reason to assume the world of PKI is limited
> to
> >> TLS, and tremendous harm has been done to the ecosystem, as clearly and
> >> obviously demonstrated by the failures of CAs to correctly implement and
> >> validate the information in a certificate, or timely revoke them. The
> fact
> >> that were multiple CAs who issued certificates like “Some-State” is a
> >> damning indictment not just on those CAs, but in an industry that does
> not
> >> see such certificates as an existential threat to the 

Re: Logotype extensions

2019-07-11 Thread Phillip Hallam-Baker via dev-security-policy
On Thu, Jul 11, 2019 at 12:19 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 7:26 PM Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
>> Because then the Mozilla ban will be used to prevent any work on
>> logotypes in CABForum and the lack of CABForum rules will be used as
>> pretext for not removing the ban.
>>
>> I have been doing standards for 30 years. You know this is exactly how
>> that game always plays out.
>>
>
> Citation please? The last two examples I can recall of a Browser
> clarifying or overriding CAB Forum policy are:
> 1. banning organizationIdentifier - resulting in ballot SC17 [1] , which
> properly defines the requirements for using this Subject attribute.
> 2. banning domain validation method #10 - resulting in the ACME TLS ALPN
> challenge [2], which is nearly through the standards process.
>
> In both examples, it appears that Browser policy encouraged the
> development of standards.
>

It is what happened when I proposed logotypes ten years ago.



> If you don't want to use the extension, that is fine. But if you attempt
>> to prohibit anything, run it by your lawyers first and ask them how it is
>> not a restriction on trade.
>>
>> It is one thing for CABForum to make that requirement, quite another for
>> Mozilla to use its considerable market power to prevent other browser
>> providers making use of LogoTypes.
>>
>
> If this proposal applied to any certificate issued by a CA, I might agree,
> but it doesn't. CAs are free to do whatever they want with hierarchies that
> aren't trusted by Mozilla. It's not clear to me how a CA would get a
> profile including a Logotype through a BR audit, but that's beside the
> point.
>

Since Mozilla uses the same hierarchies that are used by all the other
browsers and are the only hierarchies available, I see a clear restraint of
trade issue.

It is one thing for Mozilla to decide not to use certain data in the
certificate, quite another to prohibit CAs from providing that data to
other parties.

The domain validation case is entirely different because the question there
is how data Mozilla intends to rely on is validated.


A better way to state the requirement is that CAs should only issue
>>>> logotypes after CABForum has agreed validation criteria. But I think that
>>>> would be a mistake at this point because we probably want to have
>>>> experience of running the issue process before we actually try to
>>>> standardize it.
>>>>
>>>>
>>> I would be amenable to adding language that permits the use of the
>>> Logotype extension after the CAB Forum has adopted rules governing its use.
>>> I don't see that as a material change to my proposal because, either way,
>>> we have the option to change Mozilla's position based on our assessment of
>>> the rules established by the CAB Forum, as documented in policy section 2.3
>>> "Baseline Requirements Conformance".
>>>
>>> I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects
>>> the consensus reached in this thread.
>>>
>>> I also do not believe that publicly-trusted certificates are the safe
>>> and prudent vehicle for "running the issue process before we actually try
>>> to standardize it".
>>>
>>
>> You are free to ignore any information in a certificate. But if you
>> attempt to limit information in the certificate you are not intending to
>> use in your product, you are arguably crossing the line.
>>
>>
> It's quite clear from the discussions I've been involved in that at least
> one goal for Logotypes is that Browsers process them.  You implied so
> yourself above by stating that this proposal would "prevent other browser
> providers making use of LogoTypes." So you are now suggesting that Browsers
> ignore this information while others are suggesting precisely the opposite.
>

Mozilla is free to make the choice to ignore it. If you want to go ahead
and use your significant market power to prevent logotypes being added for
other browsers to use, and are confident that doing so is compliant with US
antitrust law, EU competition law (and that of the 27 member states), plus
any other state you may have picked a fight with recently, well, go ahead.

In case you hadn't noticed, there is a storm brewing over 'big tech' on
Capitol Hill. It is not yet clear which issues are going to be picked up or
by whom. It is not certain that the WebPKI will be the focus of that, but I
would not count on avoiding it. It would be prudent for every party with
significant market power to constantly ask themselves if they would be
comfortable 

Fwd: Logotype extensions

2019-07-11 Thread Phillip Hallam-Baker via dev-security-policy
[Fixing the From to match list membership]

On Wed, Jul 10, 2019 at 2:41 PM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> > Based on this discussion, I propose adding the following statement to the
> > Mozilla Forbidden Practices wiki page [1]:
> >
> > ** Logotype Extension **
> > Due to the risk of misleading Relying Parties and the lack of defined
> > validation standards for information contained in this field, as
> discussed
> > here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> > Subscriber certificates.
> >
> > Please respond if you have concerns with this change. As suggested in
> this
> > thread, we can discuss removing this restriction if/when a robust
> > validation process emerges.
> >
> > - Wayne
> >
> > [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>
> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>

[I am not currently employed by a CA. Venture Cryptography does not operate
one or plan to.]

I agree with Russ.

The Logotype extension has technical controls to protect the integrity of
the referenced image by means of a digest value.
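That integrity check is straightforward in principle. As a minimal sketch, assuming a client that has already extracted the hash algorithm and digest from the certificate's RFC 3709 LogotypeDetails structure (the function and variable names here are illustrative, not taken from any library):

```python
import hashlib

def verify_logotype_image(image_bytes: bytes, expected_digest_hex: str,
                          hash_alg: str = "sha256") -> bool:
    """Compare a fetched logotype image against the digest carried in the
    certificate's logotype extension. Render the logo only on a match."""
    actual = hashlib.new(hash_alg, image_bytes).hexdigest()
    return actual == expected_digest_hex.lower()

# A client fetches the image from the logotypeURI in the extension, then
# refuses to display it if the bytes do not hash to the pinned value.
image = b"...image bytes fetched from the logotypeURI..."
pinned = hashlib.sha256(image).hexdigest()  # normally read from the cert
assert verify_logotype_image(image, pinned)
assert not verify_logotype_image(image + b"tampered", pinned)
```

The point is that although the image itself may be fetched over an untrusted channel, its integrity is anchored in the signed certificate.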

I do find the discussion of the usability factors rather odd when I am
looking at my browser tabs decorated with completely unauthenticated
favicons. Why is it that browser providers have no problem putting that
information in front of users?

If Mozilla or Chrome or the like don't see the value of using the logotype
information, don't use it. But if you were to attempt to prevent others
making use of this capability, that looks a lot like antitrust to me.

The validation scheme I proposed when we discussed this some years back was
to build on the Madrid Treaty for registration of trademarks. International
business is already having to deal with the issue of logos being used in
multiple jurisdictions. It is a complex, difficult problem, but one that
the international system is very much aware of and working to address. That
will take time, but we can leave the hard issues to them.

I see multiple separate security levels here:

1) Anyone can create a Web page that appears to look like Ethel's Bank

2) Ethel's Bank Carolina and Ethel's Bank Spain both have trademarks in
their home countries and can register credentials showing they are Ethel's
Bank.

3) When someone goes to Ethel's Bank online they are assured that it is the
canonical Ethel's Bank and no other.

There are obvious practical problems that make (3) unreachable. Not least
the fact that one of the chief reasons that trademarks are often fractured
geographically is that they were once part of a single business that split.
Cadbury's chocolate sold in the US is made by a different company to that
sold in the UK which is why some people import the genuine article at
significant expense.

But the security value lies in moving from level 1 to level 2. Stopping a
few million Internet thieves from easily setting up fake web sites that
look like Ethel's Bank is the important task. The issue of which Ethel's
Bank is the real one is something they can sort out (expensively) between
themselves, 20 paces with loaded lawyers.


For the record, I am not sure that we can graft logotypes onto the current
Web browser model as it stands. I agree with many of Ryan's criticisms, but
not his conclusions. Our job is to make the Internet safe for the users. I
am looking at using logotypes but in a very different interaction model.
The Mesh does have a role for CAs but it is a very different role.

I will be explaining that model elsewhere. But the basic idea here is that
the proper role of the CA is primarily as an introducer. One of the reasons
the Web model is fragile today is that every transaction is essentially
independent as far as the client is concerned. The server has cookies that
link the communications together but the client starts from scratch each
time.

So imagine that I have a Bookmarks catalog that I keep my financial service
providers in and this acts as a local name provider for all of my Internet
technology. When I add Ethel's bank to my Bookmarks catalog, I see the
Ethel's bank logo as part of that interaction. A nice big fat logo, not a
small one. And I give it my pet name 'Ethel'. And when I tell Siri, or
Alexa or Cortana, 'call ethel', it call's Ethel's bank for me. Or if I type
'Ethel' into a toolbar, that is the priority.
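The catalog described above can be sketched as a petname registry: names are bound to identities once, at introduction time, and later lookups go through the local catalog. All class and field names here are hypothetical illustrations, not taken from the Mesh or any standard:

```python
class PetnameCatalog:
    """Local bookmarks catalog mapping user-chosen names to identities
    that were verified once, at introduction time."""

    def __init__(self):
        self._entries = {}  # petname -> (url, certificate fingerprint)

    def add(self, petname, url, cert_fingerprint):
        # Recorded after the user has seen the validated logo and
        # accepted the introduction; the CA acts as the introducer.
        self._entries[petname.lower()] = (url, cert_fingerprint)

    def resolve(self, petname):
        # Later lookups ("call ethel") consult the local catalog rather
        # than a global namespace an attacker could influence.
        return self._entries.get(petname.lower())

catalog = PetnameCatalog()
catalog.add("Ethel", "https://ethels-bank.example", "AA:BB:CC")
assert catalog.resolve("ethel") == ("https://ethels-bank.example", "AA:BB:CC")
assert catalog.resolve("unknown") is None
```

The design choice is that the trust decision happens once, with full context, and every later interaction reuses it; a phishing site never enters the catalog in the first place.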

Given where we have come from, the CA will have to continue to do the trust
management part of the WebPKI indefinitely. And I probably want the CA to
also have the role of warning me when a party I 

Fwd: Logotype extensions

2019-07-11 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Russ,
>
> >
> Perhaps one of us is confused because I think we're saying the same thing -
> that  rules around inclusion of Logotype extensions in publicly-trusted
> certs should be in place before CAs begin to use this extension.
>

I don't see how your proposed ban on logotypes is consistent. What that
would do is set up a situation in which it was impossible for CABForum to
develop rules for logotypes because one of the browsers had already banned
their use.

A better way to state the requirement is that CAs should only issue
logotypes after CABForum has agreed validation criteria. But I think that
would be a mistake at this point because we probably want to have
experience of running the issue process before we actually try to
standardize it.

I can't see Web browsing being the first place people are going to use
logotypes. I think they are going to be most useful in other applications.
And we actually have rather a lot of those appearing right now. But they
are Applets consisting of a thin layer on top of a browser and the logotype
stuff is relevant to the thin layer rather than the substrate.


For example, I have lots of gadgets in my house. Right now, every different
vendor who does an IoT device has to write their own app and run their own
service. And the managers are really happy with that at the moment because
they see it as all upside.

I think they will soon discover that most devices being connected to the
Internet aren't actually very useful if the only thing they connect to is a
manufacturer site, and those sites start to cost money to run. So I think we will
end up with an open interconnect approach to IoT in the end regardless of
what a bunch of marketing VPs think should happen. Razor and blades models
are really profitable but they are also vanishingly rare because the number
2 and 3 companies have an easy way to enter the market by opening up.

Authenticating those devices to the users who bought them, and
authenticating the code updates: those are areas where logotypes can be
really useful.


-- 
Website: http://hallambaker.com/


Logotype extensions

2019-07-11 Thread Phillip Hallam-Baker via dev-security-policy
[Fixing the From]

On Wed, Jul 10, 2019 at 6:11 PM Wayne Thayer  wrote:

> On Wed, Jul 10, 2019 at 2:31 PM Phillip Hallam-Baker <
> ph...@hallambaker.com> wrote:
>
>> On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> Russ,
>>>
>>> >
>>> Perhaps one of us is confused because I think we're saying the same
>>> thing -
>>> that  rules around inclusion of Logotype extensions in publicly-trusted
>>> certs should be in place before CAs begin to use this extension.
>>>
>>
>> I don't see how your proposed ban on logotypes is consistent. What that
>> would do is set up a situation in which it was impossible for CABForum to
>> develop rules for logotypes because one of the browsers had already banned
>> their use.
>>
>>
> How exactly does a Browser banning the use of an extension prevent the CAB
> Forum from developing rules to govern the use of said extension? If
> anything, it would seem to encourage the CAB Forum to take on that work.
> Also, as has been discussed, it is quite reasonable to argue that the
> inclusion of this extension is already forbidden in a BR-compliant
> certificate.
>

Because then the Mozilla ban will be used to prevent any work on logotypes
in CABForum, and the lack of CABForum rules will then be used as a pretext
for not removing the ban.

I have been doing standards for 30 years. You know this is exactly how that
game always plays out.

If you don't want to use the extension, that is fine. But if you attempt to
prohibit anything, run it by your lawyers first and ask them how it is not
a restriction on trade.

It is one thing for CABForum to make that requirement, quite another for
Mozilla to use its considerable market power to prevent other browser
providers from making use of logotypes.




> A better way to state the requirement is that CAs should only issue
>> logotypes after CABForum has agreed validation criteria. But I think that
>> would be a mistake at this point because we probably want to have
>> experience of running the issue process before we actually try to
>> standardize it.
>>
>>
> I would be amenable to adding language that permits the use of the
> Logotype extension after the CAB Forum has adopted rules governing its use.
> I don't see that as a material change to my proposal because, either way,
> we have the option to change Mozilla's position based on our assessment of
> the rules established by the CAB Forum, as documented in policy section 2.3
> "Baseline Requirements Conformance".
>
> I do not believe that changing the "MUST NOT" to "SHOULD NOT" reflects the
> consensus reached in this thread.
>
> I also do not believe that publicly-trusted certificates are the safe and
> prudent vehicle for "running the issue process before we actually try to
> standardize it".
>

You are free to ignore any information in a certificate. But if you attempt
to limit information in a certificate that you are not intending to use in
your own product, you are arguably crossing the line.




> I can't see Web browsing being the first place people are going to use
>> logotypes. I think they are going to be most useful in other applications.
>> And we actually have rather a lot of those appearing right now. But they
>> are Applets consisting of a thin layer on top of a browser and the logotype
>> stuff is relevant to the thin layer rather than the substrate
>>
>
> If the use case isn't server auth or email protection, then publicly
> trusted certificates shouldn't be used. Full stop. How many times do we
> need to learn that lesson?
>

That appears to be an even more problematic statement. There have always
been more stakeholders than just the browser providers on the relying
applications side.

Those applets are competing with your product. Again, talk to your legal
people. If you use your market power to limit the functionalities that your
competitors can offer, you are going to have real problems.




-- 
Website: http://hallambaker.com/



Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 4:54 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Russ,
>
> >
> Perhaps one of us is confused because I think we're saying the same thing -
> that  rules around inclusion of Logotype extensions in publicly-trusted
> certs should be in place before CAs begin to use this extension.
>

I don't see how your proposed ban on logotypes is consistent. What that
would do is set up a situation in which it was impossible for CABForum to
develop rules for logotypes because one of the browsers had already banned
their use.

A better way to state the requirement is that CAs should only issue
logotypes after CABForum has agreed validation criteria. But I think that
would be a mistake at this point because we probably want to have
experience of running the issue process before we actually try to
standardize it.

I can't see Web browsing being the first place people are going to use
logotypes. I think they are going to be most useful in other applications.
And we actually have rather a lot of those appearing right now. But they
are Applets consisting of a thin layer on top of a browser and the logotype
stuff is relevant to the thin layer rather than the substrate.


For example, I have lots of gadgets in my house. Right now, every different
vendor who does an IoT device has to write their own app and run their own
service. And the managers are really happy with that at the moment because
they see it as all upside.

I think they will soon discover that most devices being made to connect to
the Internet aren't actually very useful if the only thing they connect to
is a manufacturer site, and those sites start to cost money to run. So I
think we will end up with an open interconnect approach to IoT in the end,
regardless of what a bunch of marketing VPs think should happen.
Razor-and-blades models are really profitable, but they are also
vanishingly rare, because the number 2 and 3 companies have an easy way to
enter the market by opening up.

Authenticating those devices to the users who bought them, authenticating
the code updates. Those are areas where the logotypes can be really useful.


Re: Logotype extensions

2019-07-10 Thread Phillip Hallam-Baker via dev-security-policy
On Wed, Jul 10, 2019 at 2:41 PM housley--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 5, 2019 at 7:53:45 PM UTC-4, Wayne Thayer wrote:
> > Based on this discussion, I propose adding the following statement to the
> > Mozilla Forbidden Practices wiki page [1]:
> >
> > ** Logotype Extension **
> > Due to the risk of misleading Relying Parties and the lack of defined
> > validation standards for information contained in this field, as
> discussed
> > here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
> > Subscriber certificates.
> >
> > Please respond if you have concerns with this change. As suggested in
> this
> > thread, we can discuss removing this restriction if/when a robust
> > validation process emerges.
> >
> > - Wayne
> >
> > [1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ
>
> People find logos very helpful.  That is why many browsers display a tiny
> logo in the toolbar.
>
> I would suggest that a better way forward is to start the hard work on the
> validation process.  It will not be difficult for that to become more
> robust and accessible than the logos in the toolbar.
>

[I am not currently employed by a CA. Venture Cryptography does not operate
one or plan to.]

I agree with Russ.

The Logotype extension has technical controls to protect the integrity of
the referenced image by means of a digest value.
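
That integrity check is just a digest comparison at the relying party. A
minimal sketch in Python (the function name and the choice of SHA-256 are
illustrative; RFC 3709 carries a hash algorithm identifier alongside the
digest value, so the algorithm should come from the extension itself):

```python
import hashlib
import hmac

def verify_logotype_image(image_bytes: bytes, expected_digest_hex: str,
                          algorithm: str = "sha256") -> bool:
    """Recompute the digest of the fetched logo image and compare it
    against the value carried in the certificate's logotype extension."""
    digest = hashlib.new(algorithm, image_bytes).hexdigest()
    # compare_digest avoids leaking match position through timing
    return hmac.compare_digest(digest, expected_digest_hex.lower())
```

A relying party that fetches the image from the URI in the extension would
run this check before ever rendering the logo, so a substituted image
simply fails closed.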

I do find the discussion of the usability factors rather odd when I am
looking at my browser tabs decorated with completely unauthenticated
favicons. Why is it that browser providers have no problem putting that
information in front of users?

If Mozilla or Chrome or the like don't see the value of using the logotype
information, then don't use it. But if you were to attempt to prevent others
from making use of this capability, that looks a lot like an antitrust
problem to me.

The validation scheme I proposed when we discussed this some years back was
to build on the Madrid Treaty for the registration of trademarks.
International business already has to deal with the issue of logos being
used in multiple jurisdictions. It is a complex, difficult problem, but one
that the international system is very much aware of and working to address.
That will take time, but we can leave the hard issues to them.

I see multiple separate security levels here:

1) Anyone can create a Web page that appears to look like Ethel's Bank

2) Ethel's Bank Carolina and Ethel's Bank Spain both have trademarks in
their home countries and can register credentials showing they are Ethel's
Bank.

3) When someone goes to Ethel's Bank online they are assured that it is the
canonical Ethel's Bank and no other.

There are obvious practical problems that make (3) unreachable. Not least
the fact that one of the chief reasons that trademarks are often fractured
geographically is that they were once part of a single business that split.
Cadbury's chocolate sold in the US is made by a different company to that
sold in the UK which is why some people import the genuine article at
significant expense.

But the security value lies in moving from level 1 to level 2. Stopping a
few million Internet thieves from easily setting up fake web sites that
look like Ethel's Bank is the important task. The issue of which Ethel's
Bank is the real one is something they can sort out (expensively) between
themselves: 20 paces with loaded lawyers.


For the record, I am not sure that we can graft logotypes onto the current
Web browser model as it stands. I agree with many of Ryan's criticisms, but
not his conclusions. Our job is to make the Internet safe for the users. I
am looking at using logotypes but in a very different interaction model.
The Mesh does have a role for CAs but it is a very different role.

I will be explaining that model elsewhere. But the basic idea here is that
the proper role of the CA is primarily as an introducer. One of the reasons
the Web model is fragile today is that every transaction is essentially
independent as far as the client is concerned. The server has cookies that
link the communications together but the client starts from scratch each
time.

So imagine that I have a Bookmarks catalog in which I keep my financial
service providers, and this acts as a local name provider for all of my
Internet technology. When I add Ethel's Bank to my Bookmarks catalog, I see
the Ethel's Bank logo as part of that interaction. A nice big fat logo, not
a small one. And I give it my pet name 'Ethel'. When I tell Siri, or Alexa,
or Cortana, 'call Ethel', it calls Ethel's Bank for me. Or if I type
'Ethel' into a toolbar, that entry takes priority.
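
The petname idea above can be sketched as follows. This is purely
illustrative (the field names and the catalog shape are my own invention,
not any published Mesh format): the catalog binds a user-chosen name to an
identity verified at introduction time, and lookups resolve through the
local binding before any global name service.

```python
from dataclasses import dataclass

@dataclass
class Bookmark:
    petname: str           # user-chosen local name, e.g. "Ethel"
    dns_name: str          # service address validated at introduction
    cert_fingerprint: str  # key pin established when the bookmark was made
    logo_digest: str       # digest of the logotype image shown to the user

class BookmarkCatalog:
    """Local name provider: petnames resolve before any global lookup."""
    def __init__(self):
        self._by_petname = {}

    def add(self, bookmark: Bookmark) -> None:
        self._by_petname[bookmark.petname.lower()] = bookmark

    def resolve(self, name: str):
        # A hit returns the pinned binding; a miss means the application
        # falls back to whatever global resolution it normally uses.
        return self._by_petname.get(name.lower())
```

The point of the design is that after the one-time introduction, typing
'Ethel' never depends on an attacker-influenced global namespace.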

Given where we have come from, the CA will have to continue to do the trust
management part of the WebPKI indefinitely. And I probably want the CA to
also have the role of warning me when a party I previously trusted has
defaulted in some way.

Re: question about DNS CAA and S/MIME certificates

2018-05-16 Thread Phillip Hallam-Baker via dev-security-policy
On Wednesday, May 16, 2018 at 2:16:14 AM UTC-4, Tim Hollebeek wrote:
> This is the point I most strongly agree with.
> 
> I do not think it's at odds with the LAMPS charter for 6844-bis, because I do 
> not think it's at odds with 6844.

Updating 6844 is easy. Just define the tag and specify scope for issue / 
issuewild / issueclient sensibly. 
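
A sketch of how that tag scoping might work, in Python. Note the
'issueclient' tag here is hypothetical, following the discussion in this
thread; RFC 6844 itself defines only issue, issuewild, and iodef, and the
fallback-to-issue rule shown is an assumption modeled on the issuewild
behavior:

```python
def relevant_caa_tag(cert_is_wildcard: bool, cert_is_smime: bool) -> str:
    """Pick which CAA property governs a given issuance request."""
    if cert_is_smime:
        return "issueclient"  # hypothetical tag for S/MIME client certs
    return "issuewild" if cert_is_wildcard else "issue"

def ca_may_issue(caa_records: dict, tag: str, ca_domain: str) -> bool:
    """caa_records maps property tag -> list of authorized CA domains.
    No CAA records at all means any CA may issue; an empty list for the
    governing tag means no CA may issue."""
    if not caa_records:
        return True
    authorized = caa_records.get(tag)
    if authorized is None:
        # Absent issuewild/issueclient falls back to the issue set
        authorized = caa_records.get("issue", [])
    return ca_domain in authorized
```

The check itself is trivial; the work is agreeing on the tag semantics and
getting CAs to evaluate them at issuance time.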

But that is only half the job really. If we want to get S/MIME widely used, we 
have to do ACME for client certs and integrate it into the MUAs. Not difficult 
but something needing to be done. 

More difficult is working out what an S/MIME CA does, where organizational 
validation etc. adds value and how this relates to the OpenPGP way of doing 
things. 


It occurred to me last night that the difference between S/MIME and OpenPGP 
trust is that one is by reference and the other is by value. S/MIME is 
certainly the solution for PayPal-like situations because the trust 
relationship is (usually) with PayPal, not the individual I am talking to. Key 
fingerprints have the benefit of binding to the person, which may be an 
advantage in non-organizational situations.

These are not disjoint sets of course and there is no reason to switch mail 
encryption technologies depending on the context in which we are communicating. 
I would rather add certificate capabilities to OpenPGP-as-deployed and/or 
S/MIME-as-deployed.


Re: question about DNS CAA and S/MIME certificates

2018-05-15 Thread Phillip Hallam-Baker via dev-security-policy
When I wrote CAA, my intention was for it to apply to SSL/TLS certs only. I did 
not consider S/MIME certs to be relevant precisely because of the 
al...@gmail.com problem.

I now realize that was entirely wrong and that there is in fact great utility 
in allowing domain owners to control their domains (or not).

If gmail want to limit the issue of Certs to one CA, fine. That is a business 
choice they have made. If you want to have control of your online identity, you 
need to have your own personal domain. That is why I have hallambaker.com. All 
my mail is forwarded to gmail.com but I control my identity and can change mail 
provider any time I want.

One use case that I see as definitive is to allow paypal to S/MIME sign their 
emails. That alone could take a bite out of phishing. 

But even with gmail, the only circumstance I could see where a mail service 
provider like that would want to restrict cert issue to one CA would be if they 
were to roll out S/MIME with their own CA.


Re: Implementing a SHA-1 ban via Mozilla policy

2016-11-07 Thread Phillip Hallam-Baker
Remember the DigiNotar incident? At the time, I thought that pulling the
DigiNotar roots was exactly the right thing to do. I didn't say so as it
isn't proper for people to be suggesting putting their competitors out of
business. But I thought it the right thing to do.

Not long after, I was sitting in a conference at NIST listening to a talk on
how shutting down DigiNotar had shut down the port of Amsterdam and left
meat rotting on the quays, etc. Oops.

The WebPKI is a complicated infrastructure that is used in far more ways
than any of us is aware of. And when it was being developed it wasn't clear
what the intended scope of use was. So it isn't very surprising that it has
been used for a lot of things like point of sale terminals etc.

It is all very well saying that people shouldn't have done these things
now that the facts are known. But right now, I don't see any program in place
telling people in the IoT space what they should be doing for devices that
can't be upgraded in the field.

None of the current browser versions support SHA-1. Yes, people could in
theory turn it back on for some browsers but that isn't an argument because
the same people can edit their root store themselves as well. Yes people
are still using obsolete versions of Firefox etc. but do we really think
that SHA-1 is the weakest point of attack?

If digest functions are so important, perhaps the industry should be
focusing on deployment of SHA-3 as a backup in case SHA-2 is found wanting
in the future.
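
Carrying a second, structurally different digest alongside SHA-2 is cheap
at the application layer. A sketch of that backup idea in Python (the
dual-digest manifest format is purely illustrative; SHA-3 has been in the
standard hashlib module since Python 3.6):

```python
import hashlib

def dual_digest(data: bytes) -> dict:
    """Publish both digests so verifiers can fall back to SHA-3
    if SHA-2 is ever found wanting."""
    return {
        "sha2-256": hashlib.sha256(data).hexdigest(),
        "sha3-256": hashlib.sha3_256(data).hexdigest(),
    }
```

Because SHA-3's sponge construction shares no internal structure with the
SHA-2 family, a break in one is very unlikely to carry over to the other.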


Re: StartEncrypt considered harmful today

2016-06-30 Thread Phillip Hallam-Baker
On Thu, Jun 30, 2016 at 12:46 PM, Juergen Christoffel <
juergen.christof...@zv.fraunhofer.de> wrote:

> On 30.06.16 18:24, Phillip Hallam-Baker wrote:
>
>> What makes something easy to hack in Perl does not make for good security
>> architecture.
>>
>
> Bad design, engineering or implementation is not primarily a problem of
> the language used. Or we would never have seen buffer overflows in C.
> Please castigate the implementor instead.


My college tutor, Tony Hoare, used his Turing Award acceptance speech to
warn people why that feature of C was a terrible architectural blunder.

If you are writing security code without strong type checking and robust
memory management with array bounds checking, then you are doing it wrong.


Re: StartEncrypt considered harmful today

2016-06-30 Thread Phillip Hallam-Baker
Argh

As with Ethereum, the whole engineering approach gives me a cold sweat.
Security and scripting languages are not a good mix.

What makes something easy to hack in Perl does not make for good security
architecture.


:(



On Thu, Jun 30, 2016 at 11:30 AM, Rob Stradling 
wrote:

> https://www.computest.nl/blog/startencrypt-considered-harmful-today/
>
> Eddy, is this report correct?  Are you planning to post a public incident
> report?
>
> Thanks.
>
> --
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: When good certs do bad things

2016-06-03 Thread Phillip Hallam-Baker
On Fri, Jun 3, 2016 at 2:03 PM, Nick Lamb  wrote:

> On Friday, 3 June 2016 17:25:11 UTC+1, Peter Kurrasch  wrote:
> > Regarding use of the term "bad", what does anyone think about this as an
> alternative: "furtherance of criminal activity"
>
> As far as I'm aware all of the following are examples of criminal activity:
>
> Gambling (in some but not all of the United States of America)
>
> Glorifying Adolf Hitler (in Germany).
>
> Advertising the availability of sexual services such as in-call
> prostitution (United Kingdom)
>
> Insulting the King of Thailand (Thailand)
>
> Maybe you personally don't think any of the above should be permitted on
> the World Wide Web. But this discussion is about the policy of Mozilla's
> trust store and not about you personally, so the question becomes whether
> any Mozilla users expect to be able to "further" any of these activities
> using Firefox and I think the unequivocal answer is yes, yes they do.
>

The original design of the WebPKI required authentication of the
organization for that exact reason.

If a company is registered in Germany, you probably expect it to follow
German laws. If you are buying from a company, the fact that they are
registered in Germany or Nigeria may affect the expectations you have for
performance of the contract - and the types of assurance you would require.


Re: Private PKIs, Re: Proposed limited exception to SHA-1 issuance

2016-02-29 Thread Phillip Hallam-Baker
On Mon, Feb 29, 2016 at 7:09 AM, Peter Gutmann
 wrote:
> Jürgen Brauckmann  writes:
>
>>Nice example from the consumer electronics world: Android >= 4.4 is quite
>>resistant against private PKIs. You cannot import your own/your corporate
>>private Root CAs for Openvpn- or Wifi access point security without getting
>>persistent, nasty, user-confusing warning messages: "A third party is capable
>>of monitoring your network activity".
>>
>>http://www.howtogeek.com/198811/ask-htg-whats-the-deal-with-androids-persistent-network-may-be-monitored-warning/
>
> Ugh, yuck!  So on the one hand we have numerous research papers showing that
> Android apps that blindly trust any old cert they find are a major problem,
> and then we have Google sabotaging any attempt to build a proper trust chain
> for Android apps.

Not just Android. Windows has all sorts of cool cert-chain-building
algorithms in its APIs. But they require the certificates to be
installed in the machine cert store.

Which makes them totally useless for my purposes in the Mesh as the
point is to give users a personal PKI with themselves as the root of
trust.


Re: [FORGED] Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-12 Thread Phillip Hallam-Baker
It really isn't a good idea for Mozilla to try to mitigate the
security concerns of people living in a police state. The cost of
doing so is that you will set precedents that others will demand be
respected.

Yes, providing crypto with a hole in it will be better than no crypto
at all for the people who don't have access to full-strength crypto.
But if you go that route, only crypto with a hole will be available.


Re: [FORGED] Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-12 Thread Phillip Hallam-Baker
On Tue, Jan 12, 2016 at 11:46 AM, Jakob Bohm <jb-mozi...@wisemo.com> wrote:
> On 12/01/2016 16:49, Phillip Hallam-Baker wrote:
>>
>> It really isn't a good idea for Mozilla to try to mitigate the
>> security concerns of people living in a police state. The cost of
>> doing so is you will set precedents that others demand be respected.
>>
>> Yes providing crypto with a hole in it will be better than no crypto
>> at all for the people who don't have access to full strength crypto.
>> But if you go that route only crypto with a hole will be available.
>>
>
> No one (except the MiTM CA itself, possibly) is suggesting that Mozilla
> include or authorize any MiTM CA to work in its browsers (or anything else
> using the Mozilla CA list).
>
> The discussion is how to *not* authorize it, without thereby causing too
> much collateral damage.

Yes, that is the issue we should be considering. The issue of
collateral damage isn't limited to just one set of governments, though.
Anything we allow a police state, the FBI will demand, and of course
vice versa; that is one of the reasons for rejecting the FBI's demands.


> Questions being seriously discussed:
>
> - Should Mozilla add specific mechanisms that prevent the subjects of a
>  police state from obeying police orders to compromise their own
>  browser?  This is the most hotly debated topic in this thread.
>
> - Should Mozilla find a mechanism to allow only the non-MiTM part of a
>  dual use CA which is being used both as an MiTM CA and as the
>  legitimate CA for accessing government services, such as transit visa
>  applications by foreign travelers planning to cross the territory of
>  the police state on their way somewhere else?
>
> - How to most easily reject requests by the MiTM CAs to become
>  globally trusted CAs in the Mozilla CA store.  Without causing
>  precedents that would hurt legitimate CAs from countries that happen
>  not to be allies of the USA.  So far, the best suggestion (other than
>  to stall them on technicalities) is to find an interpretation of the
>  existing CA rules which cannot be satisfied by any MiTM CA.

Not accepting a demand, and making clear that the demand will never be
accepted, is not the same as giving a refusal.

On the other questions, let us return to what the original basis for
the WebPKI was: Process.

There are existing precedents for revoking certificates that are
demonstrated to be malicious. One of the purposes of the CAA extension
was to provide an objective definition of malicious behavior. There
are at least two parties that have infrastructure that is capable of
detecting certificates that violate CAA constraints.

At the moment we don't have a very large number of domains with CAA
records. The more domain name holders we can persuade to deploy CAA,
the sooner an objective default will be detected.


Re: [FORGED] Re: Nation State MITM CA's ?

2016-01-11 Thread Phillip Hallam-Baker
On Mon, Jan 11, 2016 at 1:45 PM, Jakob Bohm  wrote:
> On 09/01/2016 19:22, Kai Engert wrote:
>>
>> On Sat, 2016-01-09 at 14:11 +, Peter Gutmann wrote:
>>>
>>> That would have some pretty bad consequences.  With the MITM CA cert
>>> enabled,
>>> Borat [0] can read every Kazakh user's email, but no-one else can.  With
>>> the
>>> MITM CA blacklisted, Borat can still read every Kazakh user's email, but
>>> so
>>> can everyone else on the planet.  So the choice is between privacy
>>> against
>>> everyone but one party, and privacy against no-one.
>>
>>
>> I don't understand why blacklisting a MITM CA would enable everyone to
>> read the
>> data that passes through the MITM. Could you please explain? (It sounds
>> like
>> there is either a misunderstanding on your or on my side.)
>>
>
> He is obviously referring to the fact that refusing to encrypt using
> the MiTM certificate would force users to access their e-mails (etc.)
> using unencrypted connections (plain HTTP, plain IMAP, plain POP3
> etc.), thus exposing themselves to wiretapping by parties other than
> the government in question.

That does not concern me. What does concern me is that a user of the
Web believes that their communications are encrypted when they are not.

The browser should break when communication is not possible without
interception by a third party. In this particular case the party has
demonstrated its intention to use the CA to create MITM certificates.
I suggest that as soon as evidence of such certificates is seen, the
CA be blacklisted.


Re: Nation State MITM CA's ?

2016-01-08 Thread Phillip Hallam-Baker
On Thu, Jan 7, 2016 at 2:00 PM, Kathleen Wilson  wrote:
> On 1/6/16 3:07 PM, Paul Wouters wrote:
>>
>>
>> As was in the news before, Kazakhstan has issued a national MITM
>> Certificate Agency.
>>
>> Is there a policy on what to do with these? While they are not trusted,
>> would it be useful to explicitely blacklist these, as to make it
>> impossible to trust even if the user "wanted to" ?
>>
>> The CA's are available here:
>> http://root.gov.kz/root_cer/rsa.php
>> http://root.gov.kz/root_cer/gost.php
>>
>> One site that uses these CA's is:
>> https://pki.gov.kz/index.php/en/forum/
>>
>> Paul
>
>
>
> Kazakhstan has submitted the request for root inclusion:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1232689
>
> So, we really do need to have this discussion now.
>
> I will appreciate thoughtful and constructive input into this discussion.

I suggest waiting until they name their auditors before processing the request.


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Phillip Hallam-Baker
On Fri, Oct 2, 2015 at 12:36 PM, Brian Smith  wrote:

> -- Forwarded message --
> From: Brian Smith 
> Date: Thu, Oct 1, 2015 at 7:15 AM
> Subject: Re: Policy Update Proposal: Remove Code Signing Trust Bit
> To: Gervase Markham 
> Cc: "kirk_h...@trendmicro.com" 
>
>
> On Wed, Sep 30, 2015 at 11:05 PM, Gervase Markham 
> wrote:
>
> > On 01/10/15 02:43, Brian Smith wrote:
> > > Perhaps nobody's is, and the whole idea of using publicly-trusted CAs
> for
> > > code signing and email certs is flawed and so nobody should do this.
> >
> > I think we should divide code-signing and email here. I can see how one
> > might make an argument that using Mozilla's list for code-signing is not
> > a good idea; a vendor trusting code-signing certs on their platform
> > should choose which CAs they trust themselves.
> >
> > But if there is no widely-trusted set of email roots, what will that do
> > for S/MIME interoperability?
> >
>
> First of all, there is a widely-trusted set of email roots: Microsoft's.
> Secondly, there's no indication that having a widely-trusted set of email
> roots *even makes sense*. Nobody has shown any credible evidence that it
> even makes sense to use publicly-trusted CAs for S/MIME. History has shown
> that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
> S/MIME at all.
>
> Further, there's been actual evidence presented that Mozilla's S/MIME
> software is not trustworthy due to lack of maintenance. And, really, what
> does Mozilla even know about S/MIME? IIRC, most of the S/MIME stuff in
> Mozilla products was made by Sun Microsystems. (Note: Oracle acquired Sun
> Microsystems in January 2010. But, I don't remember any Oracle
> contributions related to S/MIME. So, yes, I really mean Sun Microsystems
> that hasn't even existed for almost 6 years.)
>

While working on PRISMPROOF email (details on that next week, hopefully) I
asked around and was surprised to discover that the number of CA-issued
S/MIME certs is about the same as the number of OpenPGP keys on the key
servers. Further, the S/MIME users are paying for their certs, which
suggests it is rather more likely that they are actually using them.

And this does not count the DoD deployment or the parts of the GSA
deployment that are not outsourced.


One of the reasons it has been so hard to deploy end-to-end mail has been
the scorched-earth policy of the advocates on both sides and a refusal to
accept that the other side actually had a use case.

If people are serious about trust models and not just posturing for the
sake of it, they are going to have to describe the model they use to
evaluate the trust provided. PKI uses cryptography, but that is never the
weakest link in a well-designed system and usually not the weakest link
even in a badly designed one.

The model I have used for the past 20 years is to consider the work factor
for creating a bogus certificate. That is the model I used when we built
the Web PKI, which is predicated on the costs associated with acquiring a
certificate being greater than the value to an attacker. Requiring a
corporate registration is not an insuperable obstacle, but it imposes a
known cost, and doing that on a repeated basis without being caught or
leaving a tell-tale signal is expensive. The point of revocation was to
reduce the window of vulnerability for use of a bogus certificate and so
limit its value to the attacker.
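The work-factor argument can be sketched numerically; all figures below are illustrative assumptions, not data from the thread:

```python
# Illustrative sketch of the work-factor model: a bogus certificate is only
# worth obtaining if its expected value to the attacker over its usable
# window exceeds the cost of acquiring it. All numbers are hypothetical.

def attack_is_economic(cost_of_bogus_cert, value_per_day, window_days):
    """True if the expected value of a bogus cert over its usable window
    exceeds the cost of obtaining it."""
    return value_per_day * window_days > cost_of_bogus_cert

# With a long window of vulnerability the attack pays off...
assert attack_is_economic(cost_of_bogus_cert=5000, value_per_day=200, window_days=90)

# ...but revocation, by shrinking the window, can push it below break-even.
assert not attack_is_economic(cost_of_bogus_cert=5000, value_per_day=200, window_days=7)
```

The point of the sketch is that revocation does not have to be perfect: it only has to shrink the window enough to make repeated mis-issuance uneconomic.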

Now one of the problems in that system was that it worked too well. And so
people who should have known better decided they could shut off the
controls I and others had predicated the security model on. Then they
blamed us for the consequences.

There has only been one occasion when the WebPKI has not worked within the
design parameters and that was the DigiNotar attack.


Two years ago I extended my model to consider time, because one of the
astonishing things about notary hash chains is that it is actually quite
easy to build one whose work factor for a backdating attack can be
considered infinite.
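A minimal sketch of why backdating a notary hash chain is so hard: each entry commits to its predecessor's digest, so altering an early entry requires re-forging every later one or breaking the hash (the choice of SHA-256 here is an assumption for illustration):

```python
import hashlib

# Minimal notary hash chain: each entry commits to the previous entry's
# digest, so inserting or altering an earlier (backdated) entry changes
# every later digest. The backdating work factor is that of breaking the
# hash function, which is infeasible in practice.

def append(chain, data: bytes):
    prev = chain[-1][1] if chain else b"\x00" * 32
    digest = hashlib.sha256(prev + data).digest()
    chain.append((data, digest))

def verify(chain):
    prev = b"\x00" * 32
    for data, digest in chain:
        if hashlib.sha256(prev + data).digest() != digest:
            return False
        prev = digest
    return True

chain = []
for entry in [b"alpha", b"beta", b"gamma"]:
    append(chain, entry)
assert verify(chain)

# Tampering with an early entry invalidates the whole chain.
chain[0] = (b"backdated", chain[0][1])
assert not verify(chain)
```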

I am aware of the limitations of the PKIX trust model for the general trust
problem, but it does work well within the organizations that it is designed
to serve, and they do in fact use it on a substantial scale. Most people
who are serious about OpenPGP and not merely fighting editor wars accept
that the Web of Trust model does not scale: the Moore bound problem
prevents the WOT alone from achieving global scale.

If however you combine the CA issued cert model, the WOT model and notary
hash chains, it is not only possible to establish a robust, scaleable email
PKI, it is reasonably straightforward.


Re: [FORGED] Re: Policy Update Proposal -- Remove Email Trust Bit

2015-09-25 Thread Phillip Hallam-Baker
On Fri, Sep 25, 2015 at 8:47 AM, Peter Gutmann 
wrote:

> Eric Mill  writes:
>
> >can anyone lay out what the steps to doing that would look like so the
> S/MIME
> >community can react in more concrete ways?
>
> Well, first you'll have to tell the S/MIME community what it is you want
> them
> to do...
>

Would people be interested in the suggestion I have?

If we are going to get anywhere with end to end secure email, we need to

1) End the silly OpenPGP / S/MIME standards war

2) Adopt a design for end to end secure messaging that is as easy to use as
regular mail.

3) Design any infrastructure so there is a compelling adoption incentive
for users when market penetration is less than 5% [currently we have about
2 million users of S/MIME and the same of OpenPGP or about 0.1% of total
Internet users]

4) Support the fact that users now need to be able to read their mail on a
wide range of platforms.


I have code running in the lab that I think meets these needs. And I think
that there is a compelling reason for every necessary stakeholder to
participate:


*Users*: The ability to send E2E mail to 0.1% of mail users is not a
compelling adoption incentive. A really good password manager that allows
the user all the benefits of a cloud based password manager without relying
on the cloud service for security is probably enough to get to critical
mass.


*S/MIME & OpenPGP Community*: Yes, I get that neither of you wants to admit
defeat. But S/MIME has deployment ubiquity and OpenPGP has mindshare. You
need each other.

Fortunately we are at a technology inflection point. The transition to ECC
is going to make everyone want to throw their existing schemes away and
replace them. Not because of the ECC change but because of Matt Blaze's
work on proxy re-encryption which does not fit well with RSA but fits
marvelously with ECDHE.


*Thunderbird*:

Right now it takes me 20 minutes to configure Thunderbird to do S/MIME. I
can do the same thing for Windows Live Mail with the click of one button.
Not because of what Microsoft has done but because I took the instructions
for applying for a cert and converted them into code.

In general any set of user instructions that does not involve any user
choice can be eliminated and replaced by code.



There is also a big opportunity. Remember what originally made Mozilla the
browser to use? It wasn't being open source, it was having tabbed browsing.
I think there is a similar opportunity here. One of the things I have
noticed with the Internet and Web is that many ideas are tried before their
time has come. I saw dozens of Facebook-like schemes before that particular
one took off. Part of that is execution but another part is that people
take time to adapt to the new technology and be ready for another dose. We
had blogs back in 1994. They only took off in the Fall of 2000.

Right now Thunderbird isn't useful for much more than reading mail. It can
in theory be used for RSS feeds and for NNTP news. But those are withering.

Back when the Web began there was a product called Lotus Notes that did a
lot of very interesting things. That was the application that many of the
Internet mail standards were originally developed to support.

I think we now have most of the pieces in place that make a different type
of mail client possible, one that is a message based workflow system. The
critical piece that is missing is a usable model for security.


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Phillip Hallam-Baker
On Tue, Sep 22, 2015 at 4:47 AM, Brian Smith  wrote:

> Kathleen Wilson  wrote:
>
> > Arguments for removing the Email trust bit:
> > - Mozilla's policies regarding Email certificates are not currently
> > sufficient.
> > - What else?
> >
> >
> * It isn't clear that S/MIME using certificates from publicly-trusted CAs
> is a model of email security that is worth supporting. Alternatives with
> different models exist, such a GPG and TextSecure. IMO, the TextSecure
> model is more in line with what Mozilla is about that the S/MIME model.
>

The idea that there is one trust model that meets every need is completely
wrong.

Hierarchical trust models meet the needs of hierarchical organizations very
well. When I last did a survey I was rather surprised to find that there
are actually the same number of CA issued S/MIME certs as on the OpenPGP
servers. And that ignores a huge deployment in the US military that isn't
visible to us.

Governments and many enterprises are hierarchical, which makes that the
preferred trust model for government and business use. If I get an email
from my broker, I really want it to be from someone who is still a Fidelity
employee.

Hierarchical trust is not sufficient by itself, which is why email clients
should not be limited to a single trust model. It should be possible to
specify S/MIME keys directly by fingerprint.
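Pinning a key by fingerprint is straightforward to sketch; the key bytes below are placeholders standing in for a DER-encoded SubjectPublicKeyInfo:

```python
import hashlib

# Sketch of specifying a key directly by fingerprint, alongside (not
# instead of) hierarchical validation. The key bytes are placeholders
# standing in for a DER-encoded SubjectPublicKeyInfo.

def fingerprint(spki_der: bytes) -> str:
    return hashlib.sha256(spki_der).hexdigest()

def key_matches_pin(spki_der: bytes, pinned: str) -> bool:
    return fingerprint(spki_der) == pinned

alice_key = b"placeholder-spki-bytes-for-alice"
pin = fingerprint(alice_key)   # recorded out of band, e.g. in an address book

assert key_matches_pin(alice_key, pin)
assert not key_matches_pin(b"some-other-key", pin)
```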


> * It is better to spend energy improving TLS-related work than
> S/MIME-related stuff. The S/MIME stuff distracts too much from the TLS
> work.
>

The TLS model is server side authentication. Saying client side
authentication distracts from server side makes no sense to me.



> * We can simplify the policy and tighten up the policy language more if the
> policy only has to deal with TLS certificates.
>

You could save even more time if you stopped supporting Thunderbird.

If Mozilla isn't going to do Thunderbird right and keep it up to date, that
might be the right choice of course.


> * Mozilla's S/MIME processing isn't well supported. Large parts of it are
> out of date and the people who maintain the certificate validation logic
> aren't required to keeping S/MIME stuff working. In particular, it is OK
> according to current development policies for us to change Gecko's
> certificate validation logic so that it works for SSL but doesn't
> (completely) work for S/MIME. So, basically, Mozilla doesn't implement
> software that can properly use S/MIME certificates, as far as we know.



> Just to make sure people understand the last point: I think it is great
> that people try to maintain Thunderbird. But, it was a huge burden on Gecko
> developers to maintain Thunderbird on top of maintaining Firefox, and some
> of us (including me, when I worked at Mozilla) lobbied for a policy change
> that let us do our work without consideration for Thunderbird. Thus, when
> we completely replaced the certificate verification logic in Gecko last
> year, we didn't check how it affected Thunderbird's S/MIME processing.
> Somebody from the Thunderbird maintenance team was supposed to do so, but I
> doubt anybody actually did. So, it would be prudent to assume that
> Thunderbird's S/MIME certificate validation is broken.
>

The Internet has two killer applications: Mail and the Web. I invented
WebMail (no, really: we had a court case with a patent troll, and it turns
out that I did) and I don't think it is the right answer.

Right now there are problems with the specs for OpenPGP, and with S/MIME.
Both are examples of 90/10 engineering from the days when that was
sufficient. Today they just don't make the grade.


If people want an email infrastructure that is end-to-end secure, offers
all the capabilities of OpenPGP and S/MIME, is fully backwards compatible,
and makes email and the Web easier to use, then I have an architecture
that does exactly that.

If someone was willing to work with me and help me to integrate with
Thunderbird in the same way that I currently integrate with Windows Live
Mail (and Outlook to come) then we could open with support for all the
major desktop email clients.


At some point, I can do the same thing for WebMail, but it isn't possible
to meet all my goals there until we can move to ECC.


Re: Requirements for CNNIC re-application

2015-04-15 Thread Phillip Hallam-Baker
CT is an accountability control, not an access control.

We need both.

Sent from my difference engine


 On Apr 14, 2015, at 18:05, Matt Palmer mpal...@hezmatt.org wrote:
 
 On Tue, Apr 14, 2015 at 01:38:55PM +0200, Kurt Roeckx wrote:
 On 2015-04-14 01:15, Peter Kurrasch wrote:
 Let's use an example. Suppose CNNIC issues a cert for whitehouse[dot]gov 
 and let's further suppose that CNNIC includes this cert in the CT data 
 since they have agreed to do that. What happens next?
 
 What I've been wondering about is whether we need a mechanism where the CT
 log should approve the transition from one issuer to an other.
 
 NO.  A CT log is a *log*, not a gatekeeper.
 
 - Matt
 


Re: Requirements for CNNIC re-application

2015-04-15 Thread Phillip Hallam-Baker
On Tue, Apr 14, 2015 at 8:09 AM, Kurt Roeckx k...@roeckx.be wrote:
 On 2015-04-14 13:54, Rob Stradling wrote:

 On 14/04/15 12:38, Kurt Roeckx wrote:

 On 2015-04-14 01:15, Peter Kurrasch wrote:

 Let's use an example. Suppose CNNIC issues a cert for
 whitehouse[dot]gov and let's further suppose that CNNIC includes this
 cert in the CT data since they have agreed to do that. What happens
 next?


 What I've been wondering about is whether we need a mechanism where the
 CT log should approve the transition from one issuer to an other.


 Kurt, isn't CAA (RFC6844) the tool for this job?


 I don't see everybody publishing that.  Or do you want to make it a
 requirement that everybody publishes such a record?

I believe that as of today, CAs are required to state in their CPS whether
or not they implement CAA.

Anyone who does not implement CAA and then mis-issues just one cert that
should have been caught is going to look exceptionally stupid.


CAA tells CAs what they should not do
CT tells everyone whether or not they did it.

Those are the accountability controls.
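As a sketch of how the CAA side of those controls works, here is a simplified version of the issue check from RFC 6844 (the record contents are hypothetical, and real processing also climbs the DNS tree to find the relevant record set):

```python
# Simplified CAA "issue" rule: before issuing, a CA checks the domain's
# CAA records; it may issue only if its own identifying domain appears in
# an "issue" property, or if no "issue" properties are published at all.

def ca_may_issue(caa_records, ca_domain):
    issue_values = [value for tag, value in caa_records if tag == "issue"]
    if not issue_values:        # no issue properties: issuance unrestricted
        return True
    return ca_domain in issue_values

records = [("issue", "example-ca.com"),
           ("iodef", "mailto:security@example.com")]
assert ca_may_issue(records, "example-ca.com")
assert not ca_may_issue(records, "rogue-ca.net")
assert ca_may_issue([], "rogue-ca.net")   # absent CAA places no restriction
```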

In addition, HSTS and HPKP provide access controls, which are currently
distributed through HTTP headers and pre-loaded lists; I have a proposal
for publishing the exact same information through the DNS as CAA
attributes.


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Phillip Hallam-Baker
On Thu, Apr 2, 2015 at 11:05 AM, Kurt Roeckx k...@roeckx.be wrote:
 On 2015-04-02 16:34, Phillip Hallam-Baker wrote:

 Further no private key should ever be in a network accessible device
 unless the following apply:

 1) There is a path length constraint that limits issue to EE certs.
 2) It is an end entity certificate.

 Why 1)?

Can you state a use case that requires online issuance of key-signing certs?


Re: Name Constraints

2015-03-09 Thread Phillip Hallam-Baker
On Mon, Mar 9, 2015 at 11:38 AM, Michael Ströder mich...@stroeder.com
wrote:

 Ryan Sleevi wrote:
  Given that sites in consideration already have multiple existing ways to
  mitigate these threats (among them, Certificate Transparency, Public Key
  Pinning, and CAA),

 Any clients which already make use of CAA RRs in DNS?

 Or did you mean something else with the acronym CAA?

 Ciao, Michael.


Sites can use CAA. But the checking is not meant to happen in the client as
the client cannot know what the CAA records looked like when the cert was
issued.

A third party can check the CAA records for each new entry on a CT log
however. And I bet that every CA that implements CAA will immediately start
doing so in the hope of catching out their competitors.
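That third-party check can be sketched as follows; the log entries and policy table are hypothetical stand-ins, and a real monitor would fetch CAA via DNS close to issuance time:

```python
# Sketch of a third-party monitor: for each new CT log entry, compare the
# logged issuer against the domain's CAA policy and flag mismatches.

caa_policy = {                      # domain -> CAs authorized via CAA "issue"
    "example.com": {"example-ca.com"},
}

ct_entries = [
    ("example.com", "example-ca.com"),
    ("example.com", "rogue-ca.net"),    # should be flagged
]

def flag_violations(entries, policy):
    return [(domain, issuer) for domain, issuer in entries
            if domain in policy and issuer not in policy[domain]]

assert flag_violations(ct_entries, caa_policy) == [("example.com", "rogue-ca.net")]
```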


CAA also provides an extensible mechanism that could be used for more
general key distribution if you were so inclined.


Re: Tightening up after the Lenovo and Comodo MITM certificates.

2015-02-25 Thread Phillip Hallam-Baker
On Wed, Feb 25, 2015 at 8:59 AM, Peter Kurrasch fhw...@gmail.com wrote:

 I'm not sure I totally follow here because informed consent requires the
 ability to inform, and I don't think we have that yet.

 The way any attacker operates is to find gaps in a system and make use of
 them. In my questions I'm trying the same approach: what are some gaps in
 the Komodia solution and how might we exploit them ourselves?


There are multiple problems here. One of them is that what is obvious to
folk in the PKI community is not necessarily obvious to folk in the
Anti-Virus community. Another problem is that following the advice given
out by Harvard Business School and setting up separate arms-length
companies to work on speculative 'disruptive' products means that they are
operating without the usual QA processes you would expect of a larger
company.

I don't want to get into specifics at this point.

We can do finger pointing and blamestorming but what we really need is a
solution. I think informed consent is a major part of the problem.


Malware and crapware are a real problem. My problem with what Lenovo did
isn't just that the code they installed had bugs, it is that they installed
the stuff at all. If I pay $1,000 for a laptop, I do not expect the
manufacturer to fill the inside of the case with manure. It is clearly
worse if the manure carries a disease but the solution to the problem is to
not ship the manure at all rather than trying to pasteurize it.

So one part of the solution here is the Windows Signature edition program
which guarantees customers the crapware free computer they paid for.


Fixing the AV hole is harder. The problem as the Anti-Virus people see it
is how to scan data for potentially harmful content, whether it is mail or
documents or web pages. The AV world regards itself as being a part of the
trusted computing base and thus entitled to have full access to all data in
unencrypted form. AV code has from the start had a habit of hooking
operating system routines at a very low level and taking over the machine.

Now we in the PKI world have a rather different view here. We see the root
store as being the core of the trusted computing base and that the 'user'
should be the only party making changes. We do not accept the excuse that
an AV product is well intentioned. However recall that it was Symantec
bought VeriSign, not the other way round. We don't necessarily have the
leverage here.


The one aspect of the current root-store management model that we can
readily change is its lack of accountability and provenance. As a user I
have tools that tell me what roots are in the store, but I have no idea
how they got there. In the Windows store (which I am most familiar with),
I don't have any way to distinguish between roots from the Microsoft
program and those added by other programs.

One quick fix here would be for all trust root managers to use the CTL
mechanism defined by Microsoft (and pretty much a defacto standard) to
specify the trusted roots in their program, thus enabling people to write
tools that would make it easy to see that this version of Firefox has the
200+ keys from the program plus these other five that are not in the
program.
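A tool of the kind described could be little more than a set difference over root fingerprints; the fingerprints below are hypothetical:

```python
# Sketch of the envisioned tool: given the root fingerprints listed in a
# trust program's CTL and the roots actually present in a local store,
# report the extras (and anything missing). Fingerprints are hypothetical.

program_ctl = {"fp-mozilla-001", "fp-mozilla-002", "fp-mozilla-003"}
local_store = {"fp-mozilla-001", "fp-mozilla-002", "fp-mozilla-003",
               "fp-corp-proxy", "fp-av-scanner"}

extras = sorted(local_store - program_ctl)   # roots added outside the program
missing = sorted(program_ctl - local_store)

assert extras == ["fp-av-scanner", "fp-corp-proxy"]
assert missing == []
```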


Right now it takes a great deal of expertise to even tell if a machine has
been jiggered or not. That is the first step to knowing if the jiggering is
malicious or not and done competently or not.


Re: DSA certificates?

2014-12-23 Thread Phillip Hallam-Baker
DSA was the mandatory to implement algorithm originally since that was out
of patent earlier than RSA.

I would like to kill as many unused crypto implementations as possible. The
algorithm might be sound but an implementation that has never been used in
practice is a huge liability.




On Tue, Dec 23, 2014 at 3:31 AM, Peter Gutmann pgut...@cs.auckland.ac.nz
wrote:

 Ryan Sleevi ryan-mozdevsecpol...@sleevi.com writes:

 (and for sure, Microsoft's stack _does_ implement it,

 Does anyone know the motivation for this?  MS also implemented support for
 X9.42 certificates, which no-one has ever seen in the wild, but it was in
 receive-only mode (it would never generate data using them) and was done
 solely in order to avoid any accusations that they weren't following
 standards
 (there was this antitrust thing going on at the time).  So having it
 present
 in a MS implementation doesn't necessarily mean that it's used or
 supported,
 merely that it's, well, present in a MS implementation.

 (I'm just curious, wondering what the story behind this one is).

 Peter.



Re: New free TLS CA coming

2014-11-20 Thread Phillip Hallam-Baker
On Thu, Nov 20, 2014 at 6:22 AM, Richard Barnes rbar...@mozilla.com wrote:
 I am from Mozilla, and the replies here are exactly right.  From the 
 perspective of the Mozilla root CA program, Let's Encrypt will be treated as 
 any other applicant, should they choose to apply.  No immediate acceptance, 
 no less audited -- same audit requirements and application process as 
 everyone else.

I don't see the issue here. Comodo has been giving away certs for 8
years now. So have other CAs. Mozilla has known about that. It has
never been raised as an issue at roll over.

The issue with CACert wasn't that they were refused, they withdrew
their application after they realized that they were never going to
meet the audit criteria.

The only different thing here is that this time there is a proposal
for an automated enrollment protocol as well and presumably a
commitment to implementing it.

I have been calling for an automated enrollment protocol for quite a
while. This is the one I wrote for PRISM-PROOF email:

http://tools.ietf.org/html/draft-hallambaker-omnipublish-00


I was considering a wide range of scenarios ranging from EV certs to
certs for the coffee pot. Paid, unpaid, strong validation, DV, etc. My
model is subtly different but that was in part because I have worked
with Stephen Farrell, the current Security AD on five different
enrollment protocols over the years and I wanted to avoid the 'what
again?' response.


Re: Client certs

2014-10-20 Thread Phillip Hallam-Baker
A relevant point here is that one of the main reasons for the difficulty in
using client certs was a preposterous patent claim to the implementation of
RSA in a hardware device with a USB serial interface.

I kid you not.

That might not be as much of an issue these days. The patent might have
expired and even if it hasn't a sequence of recent SCOTUS rulings have made
those sorts of claims difficult to support.

But then again, since USB tokens are being replaced by smart phones anyway,
perhaps even that is irrelevant.


Re: Client certs

2014-10-06 Thread Phillip Hallam-Baker
On Thu, Sep 25, 2014 at 8:29 AM, Gervase Markham g...@mozilla.org wrote:
 A question which occurred to me, and I thought I'd put before an
 audience of the wise:

 * What advantages, if any, do client certs have over number-sequence
   widgets such as e.g. the HSBC Secure Key, used with SSL?

 http://www.hsbc.co.uk/1/2/customer-support/online-banking-security/secure-key

 It seems like they have numerous disadvantages (some subjective):

 * Client certs can be invisibly stolen if a machine is compromised
 * Client certs are harder to manage and reason about for an average
   person
 * Client certs generally expire and need replacing, with no warning
 * Client certs are either single-machine, or need a probably-complex
   copying process

 What are the advantages?

Going back to this thread because nobody seems to have addressed the
real issue - usability.

Right now I am working on email encryption but solving the usability
issues of email requires a general solution so I have worked on these
issues as well. And I think I have solved them.


Passwords have terrible security because they shift the cost to someone
who does not care about the security of the asset because they don't own
it. I use the same password for my Washington Post, New York Times,
Slashdot, etc. accounts, and it was leaked over five years ago. I do not
care, because it isn't my asset involved. So the passwords get forgotten,
and no password is stronger than the security of the recovery system,
which is SMTP email.

Passwords are also horribly insecure because they are disclosed to
validate them. Now we could solve this with some clever crypto scheme
but why bother when its actually easier to design a sensible PKI
scheme.

Passwords also have pretty horrid usability. But they get away with it
because implementation of client certificates is really, really bad.

One-time tokens have pretty horrid usability as well. You have to carry
the thing around with you, which I won't do unless I am paid to. So most
of those schemes are migrating onto smart phones. TA-DAA! We are now
emulating 1970s technology with a computer that would have been
supercomputer-class in the 1990s.

There is a much better way to use a smartphone: send it a message that
asks "Do you want to pay your gas bill of $503.43?", have the user answer
yes or no, and have the app return a signed message.
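That flow can be sketched with a MAC standing in for a per-device signature key; the key-provisioning step and all names here are assumptions, and a real design would use a public-key signature rather than a shared secret:

```python
import hashlib
import hmac

# Sketch of the confirm-on-phone flow: the server sends a transaction
# description, the phone app shows it, and a "yes" answer returns the
# description tagged under a key provisioned on the device. HMAC keeps
# the sketch stdlib-only; a real scheme would sign with a device key pair.

device_key = b"provisioned-at-enrolment"   # hypothetical shared secret

def approve(message: str) -> bytes:
    """What the app returns when the user taps 'yes'."""
    return hmac.new(device_key, message.encode(), hashlib.sha256).digest()

def server_verify(message: str, tag: bytes) -> bool:
    expected = hmac.new(device_key, message.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = "Do you want to pay your gas bill of $503.43?"
tag = approve(msg)                         # user tapped "yes"
assert server_verify(msg, tag)
assert not server_verify("pay $9,999.99", tag)   # altered transaction fails
```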


I am currently working on making S/MIME and PGP email really really
easy to use. As in no more effort to use than regular email. As part
of that I have written a tool that creates and installs a certificate.
For a self signed cert the process is just run the configurator tool
and its done. For a CA issued cert, the user will specify the domain
name of the CA and it is done. Not even email links to click on
because the configurator has control of their mail app and can do the
challenge/response automatically. [There are other validation schemes
supported but I won't go into them here]

What I have taken out is all the makework that the user currently has
to suffer. And this is not just bad for Thunderbird, it is poke the
user in the eye with a sharp stick bad. It literally takes a quarter
of an hour to go through all the incantations. And that is me doing
it, I know what I am doing. I would expect no more than 60% of users
to follow instructions correctly. And all the effort is complete
makework. The user has to pull the certificate out of the Windows
store and install it in T-bird. Oh and repeat once a year.

Client SSL certs are just as bad, and in addition the user interface is
horrid, as it is on every other browser.

The basic problem with most Internet crypto is that the implementation
is 'enterprise grade'. By which I mean terrible usability because the
person who decides what stuff to buy will never use it.

The problems don't require a usability lab to sort out either. In fact
DON'T go to the lab. If the user is being given work to do then the
design is wrong. I don't need to test out my configurator in the
usability lab because there isn't a user interface to test.


OK so how do we solve the usability issues Gerv raised?

* Certs expire after 1 year
* Transferring keys between devices?

Answer: We don't. Look again at the requirements. What are the use
cases that drive them? I can't see any driver for enterprise or
consumer deployments. I can't even see a need to do either in the
government case, but the first is probably inherited from NSA
doctrine.

First step to sanity is that authentication keys are only ever in one
machine. If a user has two machines then they have two keys. If they
have eight machines then they have eight keys. This solves two problems:
first, the key transport problem; second, a large chunk of the revocation
problem. If a device is lost we only need to affect one device, not every
device.
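The bookkeeping this implies is deliberately trivial: a sketch, with hypothetical device names and fingerprints:

```python
# Sketch of the one-key-per-device model: each device holds its own
# authentication key, so losing a device means revoking a single entry
# rather than re-keying every machine. Names are hypothetical.

account_keys = {                 # device -> authentication key fingerprint
    "laptop":  "fp-laptop",
    "desktop": "fp-desktop",
    "phone":   "fp-phone",
}

def revoke_device(keys, device):
    keys.pop(device, None)       # only the lost device's key is removed

revoke_device(account_keys, "phone")
assert "phone" not in account_keys
assert account_keys["laptop"] == "fp-laptop"   # other devices unaffected
```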

[Decryption keys are another matter, there are good reasons to have a
single decryption key in multiple devices. And the reason that I got
into the per device authentication keys in 

Re: Short-lived certs

2014-09-05 Thread Phillip Hallam-Baker
On Fri, Sep 5, 2014 at 5:30 AM, Gervase Markham g...@mozilla.org wrote:
 On 04/09/14 14:25, Rob Stradling wrote:
 When attempting to access an HTTPS site with an expired cert on Firefox
 32, you'll see a This Connection is Untrusted page that lets you add
 an exception and proceed.

 But when attempting to access an HTTPS site with a revoked cert, you'll
 see Secure Connection Failed and Firefox 32 does NOT let you proceed.

 Would it make sense to treat expired certs in the same way as revoked
 certs?  (And if not, why not?)

 Logically, it does make sense. In practice, revocation has a near-zero
 false-positive rate, whereas expired sadly has a much greater
 false-positive rate. Which is why Firefox treats them differently.

Which means that expired short-lived certs probably need to be treated
differently.

We probably need to mark them in some way as being intended to be
short-lived. And we certainly need to fix the problem of getting them
renewed efficiently.
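One possible marking is the validity window itself. A sketch of such a policy follows, with an assumed 10-day cut-off (the threshold and the hard-fail/warn distinction are illustrative, not anything agreed in the thread):

```python
from datetime import datetime, timedelta

# Sketch: if a cert's total validity window is short, treat its expiry as
# a hard failure (like revocation) rather than an overridable warning.
# The 10-day threshold is a hypothetical cut-off.

SHORT_LIVED_MAX = timedelta(days=10)

def expiry_policy(not_before, not_after, now):
    if now <= not_after:
        return "valid"
    if not_after - not_before <= SHORT_LIVED_MAX:
        return "hard-fail"      # expired short-lived cert: no override
    return "warn"               # expired long-lived cert: overridable

now = datetime(2014, 9, 5)
assert expiry_policy(now - timedelta(days=3), now - timedelta(days=1), now) == "hard-fail"
assert expiry_policy(now - timedelta(days=400), now - timedelta(days=1), now) == "warn"
assert expiry_policy(now - timedelta(days=3), now + timedelta(days=1), now) == "valid"
```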


Re: Short-lived certs

2014-09-05 Thread Phillip Hallam-Baker
+1

Short lifetime certs don't solve every problem with revocation. But they
allow us to drastically reduce the problem space. That makes applying other
mechanisms viable.

The end goal has to be to reduce the time window of vulnerability to the
time it takes people to respond to phishing sites and other attacks. That
is minutes, not days.

We are not going to get there soon. But that is where we have to aim for.




On Fri, Sep 5, 2014 at 12:43 PM, fhw...@gmail.com wrote:

 Hi Gerv, you've been busy!

 The cases Jeremy identified (thanks, Jeremy!) are all good problems to
 address and while I'm not unsympathetic I don't necessarily find them all
 that compelling. The situations involving network meddling by someone in
 power is especially troubling and goes beyond what I'm interested in
 covering in this discussion.

 That said, the case for performance is troubling for a couple of reasons.
 First is that I've seen many times where someone says (whatever) would
 work so much better if we could bypass this security stuff. I don't mean
 to suggest that people who want small cert chains are wanting to bypass
 security but a practice such as this does open the door for people who
 might consider such things. So my initial concern is where this might lead,
 and what protections might be needed to ensure it doesn't go further.

 The bigger problem I have with this, however, really has nothing to do
 with people who have good server configs and are competent server admins.
 In such cases, we can probably assume there are likely to be fewer mistakes
 made and thus less of an impact to security.

 My problem is what happens when the cert holder loses control of the
 private key, no matter what the reason is. Relying on the expiration date
 is only a partial answer for 2 reasons: 1) a user might choose to allow the
 expired cert with the compromised key anyway (hence my asking about its
 treatment); and 2) a short expiry might still be long enough to cause harm.
 Consider that a phishing site might only exist for 2 days, just as an
 example.

 So in order to safely proceed with a small cert solution I think we need
 to flesh out how key compromises can be mitigated.


   *From: *Gervase Markham
 *Sent: *Friday, September 5, 2014 4:47 AM
 *To: *fhw...@gmail.com; Jeremy Rowley;
 mozilla-dev-security-pol...@lists.mozilla.org
 *Subject: *Re: Short-lived certs

 On 04/09/14 19:32, fhw...@gmail.com wrote:
  Could you (or anyone) elaborate a bit on the use cases where short
  lived certs are desirable?

 See other messages in this thread - it saves a significant amount of
 setup time not to have to wait for a response from the OCSP server.

  I'm also wondering what the plan is for handling an expired short
  term cert. Will the user be given a choice of allowing an exception
  or does it get special handling?

 What if I say it's treated the same as any other expired cert?

 Gerv







Re: Short-lived certs

2014-09-04 Thread Phillip Hallam-Baker
On Thu, Sep 4, 2014 at 6:43 PM, Ryan Sleevi
ryan-mozdevsecpol...@sleevi.com wrote:
 On Thu, September 4, 2014 11:20 am, Phillip Hallam-Baker wrote:
  Some constraints:

  1) Any new scheme has to work equally well with legacy browsers and
  feature-enabled browsers.

 Sure. However, this requires a definition of legacy.


  2) Ditto for legacy servers and this is actually a harder problem as
  upgrading a server can force a complete redesign if they are using a
  middleware layer that has changed radically.

 Respectfully, Phillip, I disagree. CAs MAY offer such short-lived certs as
 an option. No one's requiring they exclusively limit issuance to it.
 There's no concern for legacy servers. If you're a legacy server, you
 don't use this. It's that simple.

It is still a problem.

The point I am trying to get across here is that there are very few
good reasons for an end user sticking to an obsolete browser and
almost all would upgrade given the choice. This is not the case for
servers and there are lots of folk who will complain if they are
forced to upgrade their server because that might require them to
change their PHP version which in turn requires them to completely
rework a ton of spaghetti code piled on top.


  Because of (1), the AIA field is going to have to be populated in EV
  certs for a very long time and so we probably don't need to raise any
  of this in CABForum right now. Lets do the work then let them follow
  the deployment. A browser doesn't have to check the AIA field just
  because it is there.

 I'm not sure I agree with your conclusion for 1. As noted elsewhere, a
 short-lived cert is effectively the same as the maximal attack window for
 a revocation response. That's it. The AIA can be dropped if they're
 equivalent.

It can be dropped as far as security is concerned. But that is only
going to save a few bytes and might cause legacy issues. So why make
being allowed to drop it a major concern at this point?

Dropping AIA is useful for the CA as I don't need to bother with OCSP
at all. But I can only drop AIA if it is not going to cause legacy
browsers to squeak about a missing OCSP distribution point.

If there are browsers that give appropriate treatment to short-lived
certs then I am sure getting CABForum to update the BRs etc. is not
going to be hard. All I am saying here is that it is not a critical-path
concern.


  Short lived certs are just as easy in theory BUT they require some new
  infrastructure to do the job right. At minimum there needs to be a
  mechanism to tell the server how to get its supply of short lived
  certificates. And we haven't designed a standard for that yet or
  really discussed how to do it and so it isn't ready to press the fire
  button on.

 I disagree here. What's at stake is not the particular mechanisms of doing
 so, nor would I endorse going down the route of standardizing such
 mechanisms as you do. I would much rather see the relevant frameworks -
 Mozilla and the BRs - altered to support them, and then allow site
 operators and CAs interested in this model work to develop the
 infrastructure based on real-world experience, rough consensus, and
 running code, rather than idealized abstractions.

I am not interested in issuing any product until my customers can use
it. And I don't see how they can use it until the cert update process
can be automated.
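A minimal sketch of what that missing automation might look like, with hypothetical names; the point is only that the server agent, not a human, decides when to fetch the next short-lived cert:

```python
from datetime import datetime, timedelta

def needs_renewal(not_after: datetime, lifetime: timedelta,
                  now: datetime, safety: float = 0.5) -> bool:
    """Renew once only a fixed fraction of the cert's lifetime remains,
    so a fresh cert is always installed well before the old one expires."""
    return (not_after - now) <= lifetime * safety
```

A server-side agent would poll this check and, when it fires, request and install a replacement cert from the CA; the request/install protocol is exactly the piece that had no standard at the time of this thread.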


  What I suggest browsers do right now is

  1) Join in the IETF discussion on the TLS/PKIX lists saying that you
  support my TLS Feature extension proposal aka MUST STAPLE.

  2) Read and comment on the proposal you have just committed to.

  3) Implement an appropriate response to a certificate that specifies a
  MUST STAPLE condition when the server does not staple. This could be
  (1) Hard Fail immediately or (2) attempt to do an OCSP lookup and hard
  fail if it does not succeed or (3) choose randomly between options 1
  and 2 so as to disincentivize CAs from misusing the flag to force
  hard fail.

 This is something you should nail down before 1 or 2.

OK, if I have to nail it down I will pick 1.

 The correct answer is hard fail. Any other answers and we'll be back here
 again in 5 years with the same issues.

That is my preference.
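The three options enumerated above sketch out as follows (hedged pseudologic; `ocsp_lookup` is a hypothetical callback returning True iff a direct lookup succeeds):

```python
import random

def on_missing_staple(option: int, ocsp_lookup=None) -> str:
    """Client response to a MUST-STAPLE cert served without a stapled
    OCSP response, per the three options listed in the thread."""
    if option == 3:
        # Randomize so CAs cannot count on the lookup fallback.
        option = random.choice([1, 2])
    if option == 1:
        return "hard-fail"              # fail immediately
    # Option 2: attempt a direct OCSP lookup, hard-failing on error.
    return "ok" if ocsp_lookup is not None and ocsp_lookup() else "hard-fail"
```

Option 3's randomization is the interesting design point: it keeps the fallback from becoming something CAs can rely on.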


  4) Implement a mechanism that regards certificates with a total
  validity interval of 72 hours or less to be valid without checking. I
  do not expect this feature to be used very soon but implementing the
  feature in the browser is probably a gating function on starting the
  server folk thinking about the best way to implement the cert update
  feature.

 And implementing it in policy is the gating function before thinking about
 implementing it in the server or the browser.

I don't see the need to gate on policy changes. What do you think
stops me issuing a 72 hour certificate today? I can't think of
anything.
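Item 4 above amounts to a one-line check on the browser side; a sketch, with certificate parsing assumed:

```python
from datetime import datetime, timedelta

SHORT_LIVED_MAX = timedelta(hours=72)

def accept_without_revocation_check(not_before: datetime,
                                    not_after: datetime) -> bool:
    """True iff the cert's *total* validity interval (X.509
    notBefore..notAfter) is 72 hours or less."""
    return (not_after - not_before) <= SHORT_LIVED_MAX
```

Note the check is on the total validity interval, not the time remaining: a long-lived cert near expiry must not slip through.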


  Rotating the server private key every 24 hours practically eliminates
  key compromise due to a server or hard drive being disposed of.

Re: OCSP and must staple

2014-05-02 Thread Phillip Hallam-Baker
OK so the state of play is that

* A new draft was submitted to make it current

* Russ Housley tells me that the transfer of the OID arc back to IANA
is almost complete

* I am waiting for comments from Brian.




On Fri, May 2, 2014 at 12:41 PM, Ben Wilson b...@digicert.com wrote:
 Does anyone have any update on the status of the must-staple OID?

 -Original Message-
 From: dev-security-policy
 [mailto:dev-security-policy-bounces+ben=digicert@lists.mozilla.org] On
 Behalf Of Brian Smith
 Sent: Thursday, April 10, 2014 5:06 PM
 To: Phillip Hallam-Baker
 Cc: dev-security-policy@lists.mozilla.org
 Subject: Re: OCSP and must staple

 On Thu, Apr 10, 2014 at 3:54 PM, Phillip Hallam-Baker
 hal...@gmail.com wrote:

 One of the problems with OCSP is the hardfail issue. Stapling reduces
 latency when a valid OCSP token is supplied but doesn't allow a server
 to hardfail if the token isn't provided as there is currently no way
 for a client to know if a token is missing because the server has been
 borked or if the server doesn't staple.

 This draft corrects the problem. It has been in IETF limbo due to the
 OID registry moving. But I now have a commitment from the AD that they
 will approve the OID assignment if there is support for this proposal
 from a browser provider:


 David Keeler was working on implementing Must-Staple in Gecko. You can point
 them to these two bugs:

 https://bugzilla.mozilla.org/show_bug.cgi?id=921907
 https://bugzilla.mozilla.org/show_bug.cgi?id=901698

 The work got stalled because we decided to fix some infrastructure issues
 (like the new mozilla::pkix cert verification library) first. Now that work
 is winding down and I think we'll be able to finish the Must-Staple
 implementation soon. Check with David.

 Cheers,
 Brian



-- 
Website: http://hallambaker.com/


Re: Turn on hardfail?

2014-04-24 Thread Phillip Hallam-Baker
If there was a DoS attack it would be the first and the last.

OCSP is only a DoS issue for servers that don't staple. All modern
servers can staple if configured to do so. Further, it is only the
weaker CAs that don't have a DoS-proof OCSP service.

So if there was a DoS attack we would see a sudden upgrade to server
stapling and the OCSP service could probably be phased out after a
short time (except for feeding the cert holders with their tokens).



On Thu, Apr 24, 2014 at 12:39 AM, Daniel Micay danielmi...@gmail.com wrote:
 I'm talking about the DoS vulnerability opened up by making a few OCSP
 servers a single point of failure for *many* sites.

 It's also not great that you have to let certificate authorities know
 about your browsing habits.






-- 
Website: http://hallambaker.com/


Re: Revocation Policy

2014-04-10 Thread Phillip Hallam-Baker
Before we get any further into this conversation, I'll just point out
that business models are not something we can discuss in CABForum.

We can 'probably' tell you what we believe the rules to be but we
can't make any comment on what they should be either in CABForum or
here.




On Thu, Apr 10, 2014 at 10:28 AM, Rob Stradling
rob.stradl...@comodo.com wrote:
 The Mozilla CA Certificate Maintenance Policy (Version 2.2) [1] says
 (emphasis mine):

 CAs _must revoke_ Certificates that they have issued upon the occurrence of
 any of the following events:
 ...
   - the CA obtains _reasonable evidence_ that the subscriber’s private key
 (corresponding to the public key in the certificate) has been compromised or
 is _suspected of compromise_ (e.g. Debian weak keys)

 I think that's pretty clear!

 The CABForum BRs go one step further, demanding that the CA revoke _within
 24 hours_.

 AFAICT, non-payment by the Subscriber does not release the CA from this
 obligation to revoke promptly.

 Anyone disagree with my interpretation?


 [1]
 https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/maintenance/


 On 10/04/14 15:16, fhw...@gmail.com wrote:

 This is an interesting issue, Kaspar, and I appreciate you raising it. I also
 personally appreciate you framing it in terms of trust because that's really
 what is at issue here.

 The whole idea of revocation is a gaping hole in the PKI landscape. The
 ability to say "don't trust me" is so poorly implemented throughout PKI as
 to be effectively non-existent. If for some reason you need to revoke a
 cert, you should do so because it's the right thing to do, but the best you
 can hope for is that some anti-security person doesn't figure out a way to
 use it anyway.

 This means that theft and other compromises of private keys remain viable
 attack vectors for those who wish to exploit them (government-sponsored
 organizations and otherwise). Private keys and the certs that go with them
 could be usable well after people think they become invalid.

 This also means that we should not be surprised to see an underground
 market appear that seeks to sell revoked certs. Given that high-value
 internet destinations might have been impacted by the Heartbleed
 vulnerability, this could definitely become a concern. Should such a place
 appear, I would think StartCom-issued certs would easily be included for
 sale.

 This also means that all "pay to revoke" policies should be viewed as
 anti-security and we need to strongly encourage that they be discontinued in
 short order. If a CA wishes to continue such policies I would question their
 trustworthiness.

 Further I think we are reaching the point where browsers have to refuse
 SSL connections when OCSP validation fails. I think it's getting harder to
 argue otherwise, but I'll let the Mozilla folks speak to that.


 -  Original Message  -
 From: Kaspar Janßen
 Sent: Thursday, April 10, 2014 4:12 AM

 On 10/04/14 10:08, Peter Eckersley wrote:

 Kaspar, suppose that Mozilla followed your suggestion and removed
 StartCom's root certificates from its trust store (or revoked them!).
 What
 would the consequences of that decision be, for the large number of
 domains
 that rely on StartCom certs?

 I hope that an appropriate policy will force authorities to reconsider
 their revocation principle. I don't want to harm anyone, nor do I want to
 work off in any way.

 The key is that anybody should be able to shout out "don't trust me
 anymore!" without a fee. Isn't that part of the trust-chain idea?

 I read a few times that Chrome doesn't even check if a certificate is
 revoked or not (at least not with the default settings). That leads me to
 the question: Is it mandatory for a CA in Mozilla's truststore to have the
 ability to revoke a certificate, or is it only an optional feature
 provided by some CAs?


 --
 Rob Stradling
 Senior Research & Development Scientist
 COMODO - Creating Trust Online





-- 
Website: http://hallambaker.com/


Re: Which SHA-2 algorithm should be used?

2014-01-08 Thread Phillip Hallam-Baker
On Wed, Jan 8, 2014 at 8:34 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:

 Man Ho (Certizen) ma...@certizen.com writes:

 If there are no constraints on choosing SHA-256, SHA-384 or SHA-512, why
 are CAs so conservative, preferring SHA-256 rather than SHA-512? I think
 going directly to a higher security strength should be preferable.

 What extra security does -512 give you that -256 doesn't?  I mean actual
 security against real threats, rather than just "it has a bigger number so
 it must be better"?  What I've heard was that the extra-sized hashes were
 added mostly for political reasons, in the same way that AES-192 and -256
 were added for political reasons (there was a perceived need to have a
 "keys go to 10" and a "keys go to 11" form for Suite B, since government
 users would look over at non-Suite-B crypto with keys that went to 11 and
 wonder why they couldn't have that too).


The main advantage is more rounds of crypto.

In PPE I use SHA-512 and truncate to 128 bits for Phingerprints.
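For concreteness, the truncation scheme described here looks roughly like this (PPE's actual encoding and presentation format are not specified in the message, so this sketch is an assumption):

```python
import hashlib

def fingerprint_128(data: bytes) -> str:
    """SHA-512 truncated to 128 bits (the first 16 bytes), hex-encoded."""
    return hashlib.sha512(data).digest()[:16].hex()
```

Truncating a longer hash costs nothing at the 128-bit strength level, and SHA-512 is often faster than SHA-256 on 64-bit hardware.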

-- 
Website: http://hallambaker.com/


Re: Stop using SHA1 in certificates

2014-01-03 Thread Phillip Hallam-Baker
The hashclash attack requires the CA to do more than just use SHA-1. They
have to use a predictable serial number.

That is not an argument against withdrawing SHA-1 in all haste. It is,
however, a reason for folk not to do the usual headless-chicken thing.
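The mitigation is simply to draw serial numbers from a CSPRNG rather than a counter, so the chosen-prefix construction has no predictable bytes to aim at; a sketch:

```python
import os

def unpredictable_serial(num_bytes: int = 8) -> int:
    """Serial number built from CSPRNG output (64 bits shown; a CA would
    pick the size per its own policy). With the serial unpredictable, a
    hashclash-style chosen-prefix collision cannot be pre-computed."""
    return int.from_bytes(os.urandom(num_bytes), "big")
```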


Striking out SHA-1 effectively means the end of RSA1024 because every
browser that can do SHA2 can almost certainly do RSA2048.

There will probably be some niche cases that call for continuing to issue
SHA-1 certs but only the genuine niche applications will want them at all
once the browsers start rejecting them.


On Fri, Jan 3, 2014 at 1:15 PM, Kurt Roeckx k...@roeckx.be wrote:

 Hi,

 Microsoft has proposed to stop issuing new certificates using
 SHA-1 by 2016.
 (
 http://blogs.technet.com/b/pki/archive/2013/11/12/sha1-deprecation-policy.aspx
 ).

 Mozilla also has a bug that even suggests to stop accepting some
 new certificates in 3 months and stop accepting any in 2017.
 https://bugzilla.mozilla.org/show_bug.cgi?id=942515

 But it's unclear if this is really a policy or just what some
 people think should happen.

 This seems to also recently have been discussed in the CA/Browser
 forum, but I have a feeling not everybody sees the need for this.
 https://cabforum.org/2013/12/19/2013-12-19-minutes/

 I want to point out that SHA-1 is broken for what it is used for in
 certificates.  SHA-1 should have a collision resistance of about
 2^80 but the best known attack reduces this to about 2^60.  In
 2012 it cost about 3M USD to break SHA-1; by 2015 this will only be
 about 700K USD.  See
 https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html

 With a collision it's possible to create a rogue CA.  See:
 http://www.win.tue.nl/hashclash/rogue-ca/

 This is only based on the best attack currently publicly
 known.  There might be other attacks that we don't know about
 yet, specialised hardware, and so on, further reducing the cost.

 This is just waiting either to happen or for someone to find out
 that it already did.

 I would like to encourage everybody to start using SHA2 in
 certificates as soon as possible, since that's clearly the
 weakest part of the whole chain.

 This is more important than stopping the use of 1024-bit RSA keys,
 since those still have a complexity of 2^80.  But you really
 should stop using them too.

 Can someone please try to convince the CAB forum about the need
 for this?


 Kurt





-- 
Website: http://hallambaker.com/


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-23 Thread Phillip Hallam-Baker
On Mon, Dec 23, 2013 at 8:54 AM, Rob Stradling rob.stradl...@comodo.com wrote:

 On 21/12/13 22:57, Phillip Hallam-Baker wrote:

  I thought that what we were trying to do here is break a deadlock
  where CAs wait for browsers and vice versa.

  I have no trouble telling a customer with a 15-year 512-bit cert that
  they need to change for a new one if they want it to work for SSL with
  the browsers.


 Indeed.  Everyone agrees.


  Revoking it without their consent is a problem though.


 Indeed.  The subject of this thread is misleading.  Kathleen's last post
 clearly confirmed...

 Rob: Will CAs need to revoke all unexpired 1024-bit certs by the cut-off
 date?
 Kathleen: No.


It would be good if the sequence of operations to follow was documented for
future reference.

One of the problems that we have had in the industry is people assuming the
decision lies with another party. When I was with my last employer I had to
keep telling people not to follow our lead in choice of crypto because we
are forced to follow rather than lead the market. A CA can't introduce a
new crypto algorithm without the browsers having implemented it five
years, preferably a decade, earlier.

-- 
Website: http://hallambaker.com/


Re: Exceptions to 1024-bit cert revocation requirement

2013-12-21 Thread Phillip Hallam-Baker
I thought that what we were trying to do here is break a deadlock
where CAs wait for browsers and vice versa.

I have no trouble telling a customer with a 15-year 512-bit cert that
they need to change for a new one if they want it to work for SSL with
the browsers.

Revoking it without their consent is a problem though.


Sent from my difference engine


 On Dec 21, 2013, at 5:23 PM, Kathleen Wilson kwil...@mozilla.com wrote:

 On 12/20/13 11:45 AM, Rob Stradling wrote:
 To me, cert revocation means replying "revoked" via OCSP for that
 cert's serial number, and also adding that cert's serial number to the CRL.

 I understand that new versions of browsers will stop accepting 1024-bit
 certs and that site operators will naturally stop using 1024-bit certs.
  But neither "stopping using" nor "stopping accepting" is the same thing
 as revocation.

 My question is simple: Will CAs need to revoke all unexpired 1024-bit
 certs by the cut-off date?

 If Yes, where is this requirement written?

 If No, please simply reply No.

 No.
 To my knowledge there is not a written requirement for CAs to revoke all 
 unexpired 1024-bit certs by a cut-off date.

 But everyone should keep the following in mind...

 https://wiki.mozilla.org/CA:MD5and1024
 All end-entity certificates with RSA key size smaller than 2048 bits must 
 expire by the end of 2013.
 Under no circumstances should any party expect continued support for RSA key 
 size smaller than 2048 bits past December 31, 2013. This date could get moved 
 up substantially if necessary to keep our users safe. We recommend all 
 parties involved in secure transactions on the web move away from 1024-bit 
 moduli as soon as possible.

 Some long-lived certs were issued before the statement was made and 
 communicated.

 Some CAs have needed to re-issue 1024-bit certs that are valid beyond 2013 in 
 order for their customers to maintain operation while transitioning to new 
 software and hardware that will support 2048-bit certs. (I am OK with this)

 At this point in time, I think the 1024-bit certs will work in Mozilla 
 products until the April 2014 time frame. But, as per 
 https://wiki.mozilla.org/CA:MD5and1024, Mozilla will take these actions 
 earlier and at its sole discretion if necessary to keep our users safe.

 Kathleen





Re: Revoking Trust in one ANSSI Certificate

2013-12-10 Thread Phillip Hallam-Baker
On Mon, Dec 9, 2013 at 2:17 PM, Jan Schejbal jan.schejbal_n...@gmx.de wrote:


 I would really love to see the explanation of how someone accidentally
 issues and deploys a MitM Sub-CA...


I think it will turn out to be essentially the same reason that Microsoft
got burned with the Flame attack.

Just because an organization has PKI expertise does not mean that it is
evenly shared in the organization or that everyone understands what the
constraints are.

The organization does not have managing crypto as its primary goal so the
processes that manage the CA do not include awareness of current crypto
affairs as a requirement.

I have similar concerns about DANE. The expectations that are placed on the
registries and registrars are quite interesting.

-- 
Website: http://hallambaker.com/