Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy
I agree Eric. I apologize for those words, they’re beneath me and everyone else 
who strives for civil debate. It’s a terrible paragraph of text.

- Paul



> On Aug 13, 2020, at 4:09 PM, Eric Mill  wrote:
> 
> On Thu, Aug 13, 2020 at 10:20 AM Paul Walsh via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> "Every domain should be allowed to have a certificate ***regardless of 
> intent***.”
> 
> They are the most outrageously irresponsible words that I’ve heard in my 
> career on the web since 1996 when I was at AOL, and sadly, I’ve heard them 
> more than once. I just can’t get my head around it. To me, those words are 
> akin to someone saying that masks, Bill Gates, 5G and vaccinations are all 
> dangerous - totally stupid and not in the best interest of society. 
> 
> Calling someone else's contributions on the list "totally stupid" is to me a 
> pretty clear violation of the code of conduct of this list. Maybe you didn't 
> mean to do exactly that, but given that you also called them "outrageously 
> irresponsible" and made a direct comparison to 5G/vaccination conspiracy 
> theories, certainly the totality of your note was unnecessarily harsh.
> 
> 
>  
> 
> - Paul
> 
> 
> 
> > On Aug 13, 2020, at 1:37 AM, Burton <j...@0.me.uk> wrote:
> > 
> > Let's Encrypt hasn't done anything wrong here.
> > Let's Encrypt has issued the certificate according to the BR requirements 
> > and their own policies.
> > 
> > Every domain should be allowed to have a certificate regardless of intent. 
> > CAs must not be allowed to act as judges.
> > 
> > Remember, all server certificates have to go into a CT log and are therefore 
> > easily findable. That can be useful in many situations.
> > 
> > On Thu, Aug 13, 2020 at 9:15 AM Matthew Hardeman via dev-security-policy 
> >  > <mailto:dev-security-policy@lists.mozilla.org> 
> > <mailto:dev-security-policy@lists.mozilla.org 
> > <mailto:dev-security-policy@lists.mozilla.org>>> wrote:
> > It’s actually really simple.
> > 
> > You end up in a position of editorializing.  If you will not provide
> > service for abuse, everyone with a gripe constantly tries to redefine abuse.
> > 
> > 
> > Additionally, this is why positive security indicators are clearly on the
> > way out.  In the not too distant future all sites will be https, so all
> > will require certs.
> > 
> > CAs are not meant to certify that the party you’re communicating with isn’t
> > a monster.  Just that if you are visiting siterunbymonster.com that you
> > really are speaking with siterunbymonster.com.
> > 
> > On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy 
> > <dev-security-policy@lists.mozilla.org> wrote:
> > 
> > > [snip]
> > >
> > > >> So the question now is what the community intends to do to retain trust
> > > >> in a certificate issuer with such an obvious malpractice enabling
> > > >> phishing sites?
> > > >
> > > > TLS is the wrong layer to address phishing at, and this issue has
> > > already been discussed extensively on this list. This domain is already
> > > blocked by Google Safe Browsing, which is the correct layer (the User
> > > Agent) to deal with phishing at. I'd suggest reading through these posts
> > > before continuing so that we don't waste our time rehashing old arguments:
> > > https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
> > >
> > >
> > > [PW]  I’m going to ignore technology and phishing here, it’s irrelevant.
> > > What we’re talking about is a company’s anti-abuse policies and how 
> > > they’re
> > > implemented and enforced. It doesn’t matter if they’re selling 
> > > certificates
> > > or apples.

Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy
Exactly what I thought - you’re either unable to answer the question honestly, 
or you simply do not care about the consequences that arise from abuse. 


> On Aug 13, 2020, at 11:19 AM, Burton  wrote:
> 
> I'm not going to answer the question because it's not relevant to discussion.
> 
> On Thu, Aug 13, 2020 at 6:57 PM Paul Walsh <p...@metacert.com> wrote:
> Let me try this. Let’s say a report of child abuse is put forward to a 
> hosting provider, should they ignore it because they “are not the police”? 
> Should companies like Twitter and Facebook do nothing to reduce the risk of 
> bullying, misinformation and other bad things? It’s ok to say you think they 
> should do nothing - but is that in the best interest of internet security and 
> for society? 
> 
> Again, I’m talking about moral obligation, not the law or even standards or 
> best practices. Why would any company not want to reduce the risk of abuse 
> for illegal intent? Just because you don’t have to do something, doesn’t mean 
> you shouldn’t. You can walk past a child being kicked in the head by a 
> strange man if you want, but it’s probably not the right thing to do. You can 
> call the police but by then they could be dead. 
> 
> Where’s your sense of doing the right thing?
> 
> 
> 
>> On Aug 13, 2020, at 10:42 AM, Burton <j...@0.me.uk> wrote:
>> 
>> I stand by the comments I made earlier, and it's the correct terminology. A 
>> domain should have a certificate regardless of the user's intent. CAs are 
>> not the police and shouldn't act as one. CAs do have to follow policies if 
>> the certificate is used in illegal activities, is misissued, etc., but no CA 
>> should refuse to issue a certificate for a domain except for certain 
>> reasons.
>> 
>> We are talking about DV certificates because that is what Let's Encrypt 
>> issues. 
>> 
>> On Thu, Aug 13, 2020 at 6:20 PM Paul Walsh <p...@metacert.com> wrote:
>> "Every domain should be allowed to have a certificate ***regardless of 
>> intent***.”
>> 
>> They are the most outrageously irresponsible words that I’ve heard in my 
>> career on the web since 1996 when I was at AOL, and sadly, I’ve heard them 
>> more than once. I just can’t get my head around it. To me, those words are 
>> akin to someone saying that masks, Bill Gates, 5G and vaccinations are all 
>> dangerous - totally stupid and not in the best interest of society. 
>> 
>> - Paul
>> 
>> 
>> 
>>> On Aug 13, 2020, at 1:37 AM, Burton <j...@0.me.uk> wrote:
>>> 
>>> Let's Encrypt hasn't done anything wrong here.
>>> Let's Encrypt has issued the certificate according to the BR requirements 
>>> and their own policies.
>>> 
>>> Every domain should be allowed to have a certificate regardless of intent. 
>>> CAs must not be allowed to act as judges.
>>> 
>>> Remember, all server certificates have to go into a CT log and are therefore 
>>> easily findable. That can be useful in many situations.
>>> 
>>> On Thu, Aug 13, 2020 at 9:15 AM Matthew Hardeman via dev-security-policy 
>>> <dev-security-policy@lists.mozilla.org> wrote:
>>> It’s actually really simple.
>>> 
>>> You end up in a position of editorializing.  If you will not provide
>>> service for abuse, everyone with a gripe constantly tries to redefine abuse.
>>> 
>>> 
>>> Additionally, this is why positive security indicators are clearly on the
>>> way out.  In the not too distant future all sites will be https, so all
>>> will require certs.
>>> 
>>> CAs are not meant to certify that the party you’re communicating with isn’t
>>> a monster.  Just that if you are visiting siterunbymonster.com that you
>>> really are speaking with siterunbymonster.com.
>>> 
>>> On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy 
>>> <dev-security-policy@lists.mozilla.org> wrote:
>>> 
>>> > [snip]
>>> >
>>> > >> So the question now is what the community intends to do to retain trust
>>> > >> in a certificate issuer with such an obvious malpractice enabling
>>> > >> phishing sites?
>>> > >
>>> > > TLS is the wrong layer to address phishing at, and this issue has
>>> > already been

Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy

> On Aug 13, 2020, at 11:04 AM, Tobias S. Josefowitz via dev-security-policy 
>  wrote:
> 
> On Thu, Aug 13, 2020 at 7:20 PM Paul Walsh via dev-security-policy
>  wrote:
>> 
>> "Every domain should be allowed to have a certificate ***regardless of 
>> intent***.”
>> 
>> They are the most outrageously irresponsible words that I’ve heard in my 
>> career on the web since 1996 when I was at AOL, and sadly, I’ve heard them 
>> more than once. I just can’t get my head around it. To me, those words are 
>> akin to someone saying that masks, Bill Gates, 5G and vaccinations are all 
>> dangerous - totally stupid and not in the best interest of society.
> 
> So in your opinion, what is wrong with every domain being allowed to
> have a certificate? What are your opinions on every domain being
> allowed TCP connections, IP addresses, its domain itself, and
> electricity? Is the certificate somehow standing out in your opinion?
> Why should it? If it was so easy for CAs to detect problematic
> domains, why isn't it for the domain registries/registrars? Why isn't
> the domain itself the problem but somehow the certificate is?

[PW] Good questions. Perhaps you could answer mine first? That is, why would a 
company not want to reduce the risk of its service being abused? Asking me to 
explain why it should seems counterproductive. It’s like asking me why I 
should stop a man from kicking a child in the head. Answer: it’s the right 
thing to do, even if I don’t have to.

“Why isn’t it for the domain registries/registrars?” They should all try to 
reduce the risk of malicious domains being registered, and/or react when 
someone complains about abuse.

When a domain is proven to be used for malicious activity it’s generally taken 
down - at least by companies that play fair. Some types of TLDs are even 
regulated to the point where you can’t buy a domain unless you have your 
identity verified. 

By deflecting the conversation to other stakeholders you’re engaging in 
“whataboutism”. Let’s stick to why any company should not try to reduce the 
risk of abuse.

- Paul


> 
> Tobi
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy



Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy
Let me try this. Let’s say a report of child abuse is put forward to a hosting 
provider, should they ignore it because they “are not the police”? Should 
companies like Twitter and Facebook do nothing to reduce the risk of bullying, 
misinformation and other bad things? It’s ok to say you think they should do 
nothing - but is that in the best interest of internet security and for 
society? 

Again, I’m talking about moral obligation, not the law or even standards or 
best practices. Why would any company not want to reduce the risk of abuse for 
illegal intent? Just because you don’t have to do something, doesn’t mean you 
shouldn’t. You can walk past a child being kicked in the head by a strange man 
if you want, but it’s probably not the right thing to do. You can call the 
police but by then they could be dead. 

Where’s your sense of doing the right thing?



> On Aug 13, 2020, at 10:42 AM, Burton  wrote:
> 
> I stand by the comments I made earlier, and it's the correct terminology. A 
> domain should have a certificate regardless of the user's intent. CAs are 
> not the police and shouldn't act as one. CAs do have to follow policies if 
> the certificate is used in illegal activities, is misissued, etc., but no CA 
> should refuse to issue a certificate for a domain except for certain 
> reasons.
> 
> We are talking about DV certificates because that is what Let's Encrypt 
> issues. 
> 
> On Thu, Aug 13, 2020 at 6:20 PM Paul Walsh <p...@metacert.com> wrote:
> "Every domain should be allowed to have a certificate ***regardless of 
> intent***.”
> 
> They are the most outrageously irresponsible words that I’ve heard in my 
> career on the web since 1996 when I was at AOL, and sadly, I’ve heard them 
> more than once. I just can’t get my head around it. To me, those words are 
> akin to someone saying that masks, Bill Gates, 5G and vaccinations are all 
> dangerous - totally stupid and not in the best interest of society. 
> 
> - Paul
> 
> 
> 
>> On Aug 13, 2020, at 1:37 AM, Burton <j...@0.me.uk> wrote:
>> 
>> Let's Encrypt hasn't done anything wrong here.
>> Let's Encrypt has issued the certificate according to the BR requirements 
>> and their own policies.
>> 
>> Every domain should be allowed to have a certificate regardless of intent. 
>> CAs must not be allowed to act as judges.
>> 
>> Remember, all server certificates have to go into a CT log and are therefore 
>> easily findable. That can be useful in many situations.
>> 
>> On Thu, Aug 13, 2020 at 9:15 AM Matthew Hardeman via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>> It’s actually really simple.
>> 
>> You end up in a position of editorializing.  If you will not provide
>> service for abuse, everyone with a gripe constantly tries to redefine abuse.
>> 
>> 
>> Additionally, this is why positive security indicators are clearly on the
>> way out.  In the not too distant future all sites will be https, so all
>> will require certs.
>> 
>> CAs are not meant to certify that the party you’re communicating with isn’t
>> a monster.  Just that if you are visiting siterunbymonster.com that you
>> really are speaking with siterunbymonster.com.
>> 
>> On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>> 
>> > [snip]
>> >
>> > >> So the question now is what the community intends to do to retain trust
>> > >> in a certificate issuer with such an obvious malpractice enabling
>> > >> phishing sites?
>> > >
>> > > TLS is the wrong layer to address phishing at, and this issue has
>> > already been discussed extensively on this list. This domain is already
>> > blocked by Google Safe Browsing, which is the correct layer (the User
>> > Agent) to deal with phishing at. I'd suggest reading through these posts
>> > before continuing so that we don't waste our time rehashing old arguments:
>> > https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
>> >
>> >
>> > [PW]  I’m going to ignore technology and phishing here, it’s irrelevant.
>> > What we’re talking about is a company’s anti-abuse policies and how they’re
>> > implemented and enforced. It doesn’t matter if they’re selling certificates
>> >

Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy
You’re way off topic. I purposely didn’t bring up indicators or phishing or 
certifying anything. Those things have absolutely nothing to do with my 
message. You’re joining dots that don’t exist in my conversation. Rather than 
do that, refer only to the words I write - not what I might be thinking or 
trying to say.

Saying that companies shouldn’t try to reduce abuse of any kind because some 
people will want to redefine what abuse is isn’t logical, in my opinion.

Please answer this to help me understand your perspective - should CAs ignore 
all other instances of abuse? If your answer is no, it would help a great deal 
if you could explain why some kinds of abuse are acceptable and others are 
not, and who gets to decide either way.

- Paul

> On Aug 13, 2020, at 1:15 AM, Matthew Hardeman  wrote:
> 
> It’s actually really simple.
> 
> You end up in a position of editorializing.  If you will not provide service 
> for abuse, everyone with a gripe constantly tries to redefine abuse.
> 
> 
> Additionally, this is why positive security indicators are clearly on the way 
> out.  In the not too distant future all sites will be https, so all will 
> require certs.
> 
> CAs are not meant to certify that the party you’re communicating with isn’t a 
> monster.  Just that if you are visiting siterunbymonster.com that you really 
> are speaking with siterunbymonster.com.
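The guarantee described in the quoted message can be seen directly in a default TLS client: with standard settings it verifies only that the certificate chains to a trusted root and covers the hostname that was requested, and says nothing about who operates the site. A minimal sketch (the helper names and `example.com` are illustrative, not from the thread):

```python
# Sketch: what a DV certificate check actually asserts. A default-configured
# TLS client verifies (1) the chain to a trusted CA and (2) that the
# certificate covers the hostname we asked for - nothing about the operator.
import socket
import ssl

def dns_names(cert: dict) -> list:
    # Pull the DNS entries from subjectAltName - the names the
    # hostname check was matched against.
    return [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]

def verified_names(hostname: str, port: int = 443) -> list:
    ctx = ssl.create_default_context()  # chain + hostname verification enabled
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return dns_names(tls.getpeercert())

# verified_names("example.com") succeeds for any site presenting a valid
# certificate - benign or malicious alike, which is the point being made.
```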
> 
> On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> [snip]
> 
> >> So the question now is what the community intends to do to retain trust 
> >> in a certificate issuer with such an obvious malpractice enabling 
> >> phishing sites?
> > 
> > TLS is the wrong layer to address phishing at, and this issue has already 
> > been discussed extensively on this list. This domain is already blocked by 
> > Google Safe Browsing, which is the correct layer (the User Agent) to deal 
> > with phishing at. I'd suggest reading through these posts before continuing 
> > so that we don't waste our time rehashing old arguments: 
> > https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
> 
> 
> [PW]  I’m going to ignore technology and phishing here, it’s irrelevant. What 
> we’re talking about is a company’s anti-abuse policies and how they’re 
> implemented and enforced. It doesn’t matter if they’re selling certificates 
> or apples.
> 
> Companies have a moral obligation (often legal) to **try** to reduce the risk 
> of their technology/service being abused by people with ill intent. If they 
> try and fail, that’s ok. I don’t think a reasonable person can disagree with 
> that. 
> 
> If Let’s Encrypt, Entrust Datacard, GoDaddy, or whoever, has been informed 
> that bad people are abusing their service, why wouldn’t they want to stop 
> that from happening? And why would anyone say that it’s ok for any service to 
> be abused? I don’t understand. 
> 
> - Paul
> 
> 
> 
> > 
> > Jonathan
> 



Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-13 Thread Paul Walsh via dev-security-policy
"Every domain should be allowed to have a certificate ***regardless of 
intent***.”

They are the most outrageously irresponsible words that I’ve heard in my career 
on the web since 1996 when I was at AOL, and sadly, I’ve heard them more than 
once. I just can’t get my head around it. To me, those words are akin to 
someone saying that masks, Bill Gates, 5G and vaccinations are all dangerous - 
totally stupid and not in the best interest of society. 

- Paul



> On Aug 13, 2020, at 1:37 AM, Burton  wrote:
> 
> Let's Encrypt hasn't done anything wrong here.
> Let's Encrypt has issued the certificate according to the BR requirements and 
> their own policies.
> 
> Every domain should be allowed to have a certificate regardless of intent. 
> CAs must not be allowed to act as judges.
> 
> Remember, all server certificates have to go into a CT log and are therefore 
> easily findable. That can be useful in many situations.
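The "easily findable" property mentioned in the quoted message can be acted on by anyone: because publicly trusted server certificates must be CT-logged, a domain's full issuance history is queryable. A sketch against the crt.sh JSON endpoint (the URL format and field names are assumptions about that service's interface, not something stated in the thread; any CT monitor would do):

```python
# Sketch: enumerate CT-logged certificates for a domain via crt.sh.
# The query URL and JSON keys are assumptions about crt.sh's interface.
import json
import urllib.request

def summarize(entries: list) -> list:
    # Reduce raw log entries to (issuer, not_before, not_after) tuples.
    return [(e["issuer_name"], e["not_before"], e["not_after"]) for e in entries]

def ct_history(domain: str) -> list:
    # Fetch every logged certificate for the domain (network required).
    url = f"https://crt.sh/?q={domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return summarize(json.load(resp))

# ct_history("example.com") would list each logged certificate for the
# domain with its issuer and validity window.
```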
> 
> On Thu, Aug 13, 2020 at 9:15 AM Matthew Hardeman via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> It’s actually really simple.
> 
> You end up in a position of editorializing.  If you will not provide
> service for abuse, everyone with a gripe constantly tries to redefine abuse.
> 
> 
> Additionally, this is why positive security indicators are clearly on the
> way out.  In the not too distant future all sites will be https, so all
> will require certs.
> 
> CAs are not meant to certify that the party you’re communicating with isn’t
> a monster.  Just that if you are visiting siterunbymonster.com that you
> really are speaking with siterunbymonster.com.
> 
> On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> 
> > [snip]
> >
> > >> So the question now is what the community intends to do to retain trust
> > >> in a certificate issuer with such an obvious malpractice enabling
> > >> phishing sites?
> > >
> > > TLS is the wrong layer to address phishing at, and this issue has
> > already been discussed extensively on this list. This domain is already
> > blocked by Google Safe Browsing, which is the correct layer (the User
> > Agent) to deal with phishing at. I'd suggest reading through these posts
> > before continuing so that we don't waste our time rehashing old arguments:
> > https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
> >
> >
> > [PW]  I’m going to ignore technology and phishing here, it’s irrelevant.
> > What we’re talking about is a company’s anti-abuse policies and how they’re
> > implemented and enforced. It doesn’t matter if they’re selling certificates
> > or apples.
> >
> > Companies have a moral obligation (often legal) to **try** to reduce the
> > risk of their technology/service being abused by people with ill intent. If
> > they try and fail, that’s ok. I don’t think a reasonable person can
> > disagree with that.
> >
> > If Let’s Encrypt, Entrust Datacard, GoDaddy, or whoever, has been informed
> > that bad people are abusing their service, why wouldn’t they want to stop
> > that from happening? And why would anyone say that it’s ok for any service
> > to be abused? I don’t understand.
> >
> > - Paul
> >
> >
> >
> > >
> > > Jonathan
> >
> >



Re: Concerns with Let's Encrypt repeated issuing for known fraudulent sites

2020-08-12 Thread Paul Walsh via dev-security-policy
[snip]

>> So the question now is what the community intends to do to retain trust 
>> in a certificate issuer with such an obvious malpractice enabling 
>> phishing sites?
> 
> TLS is the wrong layer to address phishing at, and this issue has already 
> been discussed extensively on this list. This domain is already blocked by 
> Google Safe Browsing, which is the correct layer (the User Agent) to deal 
> with phishing at. I'd suggest reading through these posts before continuing 
> so that we don't waste our time rehashing old arguments: 
> https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
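The user-agent layer Jonathan refers to is directly queryable: Google publishes a Safe Browsing Lookup API. A sketch of a v4 `threatMatches:find` request (the request shape follows Google's published v4 API; the API key and client identifiers are placeholders):

```python
# Sketch: checking a URL at the user-agent layer with the Safe Browsing
# Lookup API (v4 threatMatches:find). The api_key and client fields are
# placeholders the caller must supply.
import json
import urllib.request

API = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def lookup_body(url: str) -> dict:
    # Build the v4 lookup payload for a single URL.
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }

def is_flagged(url: str, api_key: str) -> bool:
    # POST the lookup; an empty response object means no threat list matched.
    req = urllib.request.Request(
        f"{API}?key={api_key}",
        data=json.dumps(lookup_body(url)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return bool(json.load(resp).get("matches"))
```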


[PW]  I’m going to ignore technology and phishing here, it’s irrelevant. What 
we’re talking about is a company’s anti-abuse policies and how they’re 
implemented and enforced. It doesn’t matter if they’re selling certificates or 
apples.

Companies have a moral obligation (often legal) to **try** to reduce the risk 
of their technology/service being abused by people with ill intent. If they try 
and fail, that’s ok. I don’t think a reasonable person can disagree with that. 

If Let’s Encrypt, Entrust Datacard, GoDaddy, or whoever, has been informed that 
bad people are abusing their service, why wouldn’t they want to stop that from 
happening? And why would anyone say that it’s ok for any service to be abused? 
I don’t understand. 

- Paul



> 
> Jonathan



Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Paul Walsh via dev-security-policy
text? It’s a very strange sentence that could 
have different meanings. I suggest an edit to be more clear and concise. 

Now that I have proven beyond a shadow of a doubt that we are talking about 
phishing, feel free to debate the merits of my points raised in my original 
email. 

Regards,
Paul





> On Jul 9, 2020, at 3:35 PM, Ryan Sleevi  wrote:
> 
> I’m not sure how that answered my question? Nothing about the post seems to 
> be about phishing, which is not surprising, since certificates have nothing 
> to do with phishing, but your response just talks more about phishing.
> 
> It seems you may be misinterpreting “security risks” as “phishing“, since you 
> state they’re interchangeable. Just like Firefox’s sandbox isn’t about 
> phishing, nor is the same-origin policy about phishing, nor is Rust’s memory 
> safety about phishing, it seems like certificate security is also completely 
> unrelated to phishing, and the “security risks” unrelated to phishing.
> 
> On Thu, Jul 9, 2020 at 2:48 PM Paul Walsh <p...@metacert.com> wrote:
> Good question. And I can see why you might ask that question.
> 
> The community lead of PhishTank mistakenly said that submissions should only 
> be made for URLs that are used to steal credentials. This helps to 
> demonstrate a misconception. While this might have been ok in the past, it’s 
> not today.
> 
> Phishing is a social engineering technique, used to trick consumers into 
> trusting URLs / websites so they can do bad things - including but not 
> limited to, man-in-the-middle attacks. Mozilla references this attack vector 
> as one of the main reasons for wanting to reduce the life of a cert. They 
> didn’t call it “phishing” but that’s precisely what it is.
> 
> We can remove all of my references to “phishing” and replace it with 
> “security risks” or “social engineering” if it makes this conversation a 
> little easier.
> 
> And, according to every single security company in the world that focuses on 
> this problem, certificates are absolutely used by bad actors - if only to 
> make sure they don’t see a “Not Secure” warning. 
> 
> I’m not talking about EV or identity related info here as it’s not related. 
> I’m talking about the risk of a bad actor caring to use a cert that was 
> issued to someone else when all they have to do is get a new one for free. 
> 
> I don’t see the risk that some people see. Hoping to be corrected because the 
> alternative is that browsers are about to make life harder and more expensive 
> for website owners with little to no upside - outside that of a researcher’s 
> lab. 
> 
> Warmest regards,
> Paul
> 
> 
>> On Jul 9, 2020, at 11:26 AM, Ryan Sleevi <r...@sleevi.com> wrote:
>> 
>> 
>> 
>> On Thu, Jul 9, 2020 at 1:04 PM Paul Walsh via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>> 
>> According to Google, spear phishing
>> 
>> I didn't see phishing mentioned in Mozilla's post, which is unsurprising, 
>> since certificates have nothing to do with phishing. Did I overlook 
>> something saying it was about phishing?
>> 
>> It seems reasonable to read it as it was written, which doesn't mention 
>> phishing, which isn't surprising, because certificates have never been able 
>> to address phishing.
> 



Re: Certificates possibly misissued to historical UK counties

2020-07-09 Thread Paul Walsh via dev-security-policy
As someone who worked in Richmond and lived in Surrey while registering more 
than one UK company, I can testify to this. I’d only add that the post code is 
what’s most helpful when establishing a location. 



> On Jul 9, 2020, at 5:24 PM, Nick Lamb via dev-security-policy 
>  wrote:
> 
> On Thu, 9 Jul 2020 00:33:35 -0700 (PDT)
> David Shah via dev-security-policy
>  wrote:
> 
>> Richmond in the UK has not been part of Surrey from an administrative
>> point of view since 1965. It is now part of Greater London.
> 
> If a model of how places work requires that the UK be split into
> counties then the model is defective because that's not how it has
> worked for decades.
> 
> However, for the purpose of OV/EV certificates I don't think this is a
> real concern unless the address is actively misleading rather than
> merely in some technical sense a "wrong" address. Letters which are
> otherwise correctly addressed but imply Richmond is in Surrey will be
> anyhow delivered without delay, and the address isn't made difficult to
> find in person by this "mistake".
> 
> The subscriber is uncontroversially identified, and most likely any
> weird glitches like "Richmond, Surrey" are a result of an external
> database that isn't the responsibility of a CA.
> 
> Nick.



Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Paul Walsh via dev-security-policy
Good question. And I can see why you might ask that question.

The community lead of PhishTank mistakenly said that submissions should only be 
made for URLs that are used to steal credentials. This helps to demonstrate a 
misconception. While this might have been ok in the past, it’s not today.

Phishing is a social engineering technique, used to trick consumers into 
trusting URLs / websites so they can do bad things - including but not limited 
to, man-in-the-middle attacks. Mozilla references this attack vector as one of 
the main reasons for wanting to reduce the life of a cert. They didn’t call it 
“phishing” but that’s precisely what it is.

We can remove all of my references to “phishing” and replace them with “security 
risks” or “social engineering” if it makes this conversation a little easier.

And, according to every single security company in the world that focuses on 
this problem, certificates are absolutely used by bad actors - if only to make 
sure they don’t see a “Not Secure” warning. 

I’m not talking about EV or identity related info here as it’s not related. I’m 
talking about the risk of a bad actor caring to use a cert that was issued to 
someone else when all they have to do is get a new one for free. 

I don’t see the risk that some people see. I hope to be corrected, because the 
alternative is that browsers are about to make life harder and more expensive 
for website owners with little to no upside - outside that of a researcher’s 
lab. 

Warmest regards,
Paul


> On Jul 9, 2020, at 11:26 AM, Ryan Sleevi  wrote:
> 
> 
> 
> On Thu, Jul 9, 2020 at 1:04 PM Paul Walsh via dev-security-policy 
>  <mailto:dev-security-policy@lists.mozilla.org>> wrote:
> 
> According to Google, spear phishing
> 
> I didn't see phishing mentioned in Mozilla's post, which is unsurprising, 
> since certificates have nothing to do with phishing. Did I overlook something 
> saying it was about phishing?
> 
> It seems reasonable to read it as it was written, which doesn't mention 
> phishing, which isn't surprising, because certificates have never been able 
> to address phishing.



Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Paul Walsh via dev-security-policy
Ugh, some poor language/typos, but I’m sure people can navigate them. Sorry 
about that. 



> On Jul 9, 2020, at 10:04 AM, Paul Walsh  wrote:
> 
> Thanks Ben, 
> 
> I’ve only had half a cup of coffee this am, so it’s possible I’m not yet 
> awake :)
> 
> I have a question about reasons 2 and 3 as they’re closely related to the 
> attack vector.
> 
> According to Google, spear phishing attacks have a shelf life of 7 minutes 
> while bulk campaigns have a shelf life of 13 hours. Even if we disbelieve 
> this data and multiply the numbers by 10, we end up with the majority of the 
> harm being done within a week. 
> 
> Also, if bad actors can automatically acquire a DV cert for any available 
> domain they please, is there actual risk of bad actors waiting for a domain 
> to expire so they can have a valid cert? And they can easily execute a 
> man-in-the-middle attack using a new cert that has a shelf life of 3 months.
> 
> All I’ve been working on for years is anti-phishing techniques, so I’m not 
> seeing all of the benefits as some others see them, but perhaps I’m missing 
> something.
> 
> I’m talking about the human element of bad actors here, because at the end of 
> the day, it’s all about them and what they will do with expired certs. 
> 
> If we were talking about EV I’d see every single benefit as described, but 
> not for DV. When I look at our phishing data, the cost outweighs the reasons 
> provided for reducing the shelf life of DV. 
> 
> There is a cost to website owners. I’d argue it’s an expensive exercise. CAs 
> stand to generate more revenue by shortening the life of a cert, so I don’t 
> know what their motives could be to fight against this change - aside from 
> wanting to support their customers (website owners). There was no consensus 
> in the CA/Browser Forum - CAs voted against this change.
> 
> For those who think I love CAs, my company displaces the need for EV, so I’m 
> certainly not fighting on their behalf. I just don’t see the benefits as 
> browser vendors see them, and there is still no data that I can find, to help 
> me better understand the fine details of points 2 and 3.
> 
> I believe browser vendors have the right to enforce what they deem 
> appropriate. I’m simply asking for more details given that you’re engaging 
> with the community.
> 
> Thanks,
> Paul
> 
> 
> 
> 
>> On Jul 9, 2020, at 8:46 AM, Ben Wilson via dev-security-policy wrote:
>> 
>> All,
>> This is just to let everyone know that I posted a new Mozilla Security blog
>> post this morning. Here is the link:
>> https://blog.mozilla.org/security/2020/07/09/reducing-tls-certificate-lifespans-to-398-days/
>>  
>> 
>> As I note at the end of the blog post, we continue to seek safeguarding
>> secure browsing by working with CAs as partners, to foster open and frank
>> communication, and to be diligent in looking for ways to keep our users
>> safe.
>> Thanks,
>> Ben
> 



Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-09 Thread Paul Walsh via dev-security-policy
Thanks Ben, 

I’ve only had half a cup of coffee this am, so it’s possible I’m not yet awake 
:)

I have a question about reasons 2 and 3 as they’re closely related to the 
attack vector.

According to Google, spear phishing attacks have a shelf life of 7 minutes 
while bulk campaigns have a shelf life of 13 hours. Even if we disbelieve this 
data and multiply the numbers by 10, we end up with the majority of the harm 
being done within a week. 

Also, if bad actors can automatically acquire a DV cert for any available 
domain they please, is there actual risk of bad actors waiting for a domain to 
expire so they can have a valid cert? And they can easily execute a 
man-in-the-middle attack using a new cert that has a shelf life of 3 months.

All I’ve been working on for years is anti-phishing techniques, so I’m not 
seeing all of the benefits as some others see them, but perhaps I’m missing 
something.

I’m talking about the human element of bad actors here, because at the end of 
the day, it’s all about them and what they will do with expired certs. 

If we were talking about EV I’d see every single benefit as described, but not 
for DV. When I look at our phishing data, the cost outweighs the reasons 
provided for reducing the shelf life of DV. 

There is a cost to website owners. I’d argue it’s an expensive exercise. CAs 
stand to generate more revenue by shortening the life of a cert, so I don’t 
know what their motives could be to fight against this change - aside from 
wanting to support their customers (website owners). There was no consensus in 
the CA/Browser Forum - CAs voted against this change.

For those who think I love CAs, my company displaces the need for EV, so I’m 
certainly not fighting on their behalf. I just don’t see the benefits as 
browser vendors see them, and there is still no data that I can find, to help 
me better understand the fine details of points 2 and 3.

I believe browser vendors have the right to enforce what they deem appropriate. 
I’m simply asking for more details given that you’re engaging with the 
community.

Thanks,
Paul




> On Jul 9, 2020, at 8:46 AM, Ben Wilson via dev-security-policy 
>  wrote:
> 
> All,
> This is just to let everyone know that I posted a new Mozilla Security blog
> post this morning. Here is the link:
> https://blog.mozilla.org/security/2020/07/09/reducing-tls-certificate-lifespans-to-398-days/
> As I note at the end of the blog post, we continue to seek safeguarding
> secure browsing by working with CAs as partners, to foster open and frank
> communication, and to be diligent in looking for ways to keep our users
> safe.
> Thanks,
> Ben

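As an aside, the 398-day cutoff debated in this thread reduces to simple date arithmetic on a certificate's notBefore/notAfter fields. Here is a minimal, hedged sketch in Python; the helper name and example dates are illustrative and not taken from the Mozilla policy text:

```python
from datetime import datetime, timedelta

MAX_LIFETIME = timedelta(days=398)  # cap proposed in the Mozilla blog post

def lifetime_ok(not_before: datetime, not_after: datetime) -> bool:
    """Return True if a certificate's validity period fits the 398-day cap."""
    return (not_after - not_before) <= MAX_LIFETIME

# A one-year certificate passes; a two-year certificate does not.
issued = datetime(2020, 9, 1)
print(lifetime_ok(issued, issued + timedelta(days=365)))  # True
print(lifetime_ok(issued, issued + timedelta(days=730)))  # False
```

Browsers enforce the cutoff against the validity fields of the certificate itself; the sketch only illustrates the arithmetic being argued over.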


Re: Use of information collected from problem reporting addresses for marketing?

2020-06-02 Thread Paul Walsh via dev-security-policy
I dislike being added to lists as much as the next person. There are numerous 
reasons for what might have happened. Had you set up an address for the purpose 
of contacting them, or any other company, you’d know for sure. 

My personal approach would be to ask them before emailing the list. And I’m not 
pointing the finger because you decided to email the list :))

I’ve received some unsolicited emails from people here, but I’m lucky because I 
appreciated each one - but they weren’t marketing emails. 

- Paul


>> On Jun 2, 2020, at 6:38 PM, Benjamin Seidenberg via dev-security-policy 
>>  wrote:
> Greetings:
> 
> Today, I received a marketing email from one of the CAs in Mozilla's
> program (Sectigo). As far as I know, the only interactions I've ever had
> with this CA where they would have gotten my name and email address would
> be from me submitting problem reports to them (for compromised private
> keys). Therefore, I can only assume that they mined their problem report
> submissions in order to generate their marketing contact lists.
> 
> This leads to two questions:
> 
> 1.) Is anyone aware of any policies that speak to this practice? I'm not
> aware of anything in the BRs or Mozilla policy that speak to this, but
> there are many other standards, documents, audit regimes, etc., which are
> incorporated by reference that I am not familiar with, and so it's possible
> one of them has something to say on this issue.
> 
> 2.) While I felt like this practice (if it happened the way I assumed) is
> inappropriate, is there a consensus from others that that is the case? If
> so, is there any interest in adding requirements to Mozilla's Policy about
> handling of information from problem reports received by CAs?
> 
> I do recall a discussion a while back on this list where a reporter had
> their information forwarded on to the certificate owner and got
> unpleasant emails in response and was asking whether the CAs were obligated
> to protect the identity of the reporters, but I don't recall any
> conclusions being reached.
> 
> Good Day,
> Benjamin


Re: About upcoming limits on trusted certificates

2020-03-11 Thread Paul Walsh via dev-security-policy
Thanks for the clarification, Kathleen. I tried my best not to make 
assumptions. 

- Paul

> On Mar 11, 2020, at 5:28 PM, Kathleen Wilson via dev-security-policy 
>  wrote:
> 
> On 3/11/20 4:37 PM, Paul Walsh wrote:
>> On Mar 11, 2020, at 4:11 PM, Kathleen Wilson via dev-security-policy 
>> wrote:
>>> 
>>> On 3/11/20 3:51 PM, Paul Walsh wrote:
>>>> Can you provide some insight into why you think a shorter frequency in 
>>>> domain validation would be beneficial?
>> [PW] If the owner’s identity has already been validated and that information 
>> is still valid, why ask them to validate again? 
> 
> 
> By "domain validation" I specifically mean verifying that the certificate 
> requestor owns/controls the domain name(s) to be included in the TLS 
> certificate.
> 
> 
>> [PW] I believe it’s a good idea to ensure they’re still in control of the 
>> domain. 
> 
> 
> So I guess we are in agreement on this.
> 
> 
>> My comment is in relation to the cost of validating their identity.
> 
> 
> My proposal has nothing to do with identity validation.
> 
> 
> 
>> [PW] Thanks for this info. If this is already part of the CA/B Forum, is it 
>> your intention to potentially do something different/specific for Firefox, 
>> irrespective of what happens in that forum?
> 
> 
> My proposal is that if we are going to update Mozilla's policy to require TLS 
> certs to have validity period of 398 days or less, we should also update 
> Mozilla's policy to say that re-use of domain validation is only valid up to 
> 398 days. i.e. the ownership/control of the domain name should be 
> re-validated before the renewal cert is issued.
> 
> Currently Mozilla's policy and the BRs allow the CA to re-use domain 
> validation results for up to 825 days. (which is inline with the 825 day 
> certificate validity period currently allowed by the BRs)
> 
> Kathleen
> 
> 
> 
> 


Re: About upcoming limits on trusted certificates

2020-03-11 Thread Paul Walsh via dev-security-policy

> On Mar 11, 2020, at 4:11 PM, Kathleen Wilson via dev-security-policy 
>  wrote:
> 
> On 3/11/20 3:51 PM, Paul Walsh wrote:
>> Can you provide some insight into why you think a shorter frequency in domain 
>> validation would be beneficial? 
> 
> To start with, it is common for a domain name to be purchased for one year. A 
> certificate owner that was able to prove ownership/control of the domain name 
> last year might not have renewed the domain name. So why should they be able 
> to get a renewal cert without having that re-checked?

[PW] I look at it differently. If the owner’s identity has already been 
validated and that information is still valid, why ask them to validate again? 
I would like to see the time, effort and cost to website owners reduced where 
possible - without increasing risk from a security perspective. 

That’s my response to your specific question, but what about domains that are 
purchased for longer durations? 

Given that you raised this topic, I believe the onus should be on you to 
demonstrate why it’s a good idea, not for me or others to demonstrate why it’s 
not a good idea :) I’m simply asking questions to learn more about your 
perspective. I’m on the fence until I hear of good reasons to change something 
that might not be broken.


> 
> 
>> At the very least it deserves a new thread as the potential impact could be 
>> significant.
> 
> What exactly do you think is the significant impact in regards to 
> re-verifying that the certificate requestor still has control of the domain 
> name to be included in the new certificate?

[PW] I believe it’s a good idea to ensure they’re still in control of the 
domain. My comment is in relation to the cost of validating their identity. Any 
change that you propose and which is accepted, will have an impact on website 
owners - however small we might think, it might not be small to them. 

I specifically use the term “website owners” to humanize the conversation. It’s 
not about “domains”, it’s about people who have to pay for extra things that we 
as stakeholders and guests of the web, ask of them. Or in this case, tell them. 
I’d love to hear what CAs think as they’re the ones who know what website 
owners want more than any other stakeholder. 

> 
> 
>> And out of curiosity, why not raise your question inside the CA/Browser 
>> forum if you believe the original change being discussed should have been 
>> brought up there? I believe the potential outcome would have a separate 
>> impact on CAs and website owners. In particular, it would cost website 
>> owners more time, resources, and money. For this reason, I’m assuming 
>> you’re not asking the question to simply line up with another change.
> 
> It was part of the CAB Forum Ballot SC22 that was proposed last year by 
> Google. That ballot was to change both the cert validity period and the 
> validation information to 398 days.
> "| 2020-03-01 | 4.2.1 and 6.3.2 | Certificates issued SHOULD NOT have a 
> Validity Period greater than 397 days and MUST NOT have a Validity Period 
> greater than 398 days. Re-use of validation information limited to 398 days. 
> |"
> 
> 
> Reference:
> https://cabforum.org/pipermail/servercert-wg/2019-August/000894.html
> https://github.com/cabforum/documents/compare/master...sleevi:0a72b35f7c877e6aa1e7559f712ad9eb84b2da12?diff=split#diff-7f6d14a20e7f3beb696b45e1bf8196f2

[PW] Thanks for this info. If this is already part of the CA/B Forum, is it 
your intention to potentially do something different/specific for Firefox, 
irrespective of what happens in that forum? 

I’m trying to learn more about your intent and the benefits as you perceive 
them, it’s not to debate, as I don’t have an opinion on whether it’s a good or 
bad thing. 

Thanks,
Paul

> 
> 
> Thanks,
> Kathleen



Re: About upcoming limits on trusted certificates

2020-03-11 Thread Paul Walsh via dev-security-policy
Hi Kathleen,

Can you provide some insight into why you think a shorter frequency in domain 
validation would be beneficial? At the very least it deserves a new thread as 
the potential impact could be significant. 

And out of curiosity, why not raise your question inside the CA/Browser forum 
if you believe the original change being discussed should have been brought up 
there? I believe the potential outcome would have a separate impact on CAs and 
website owners. In particular, it would cost website owners more time, 
resources, and money. For this reason, I’m assuming you’re not asking the 
question to simply line up with another change.

Thanks,
Paul


> On Mar 11, 2020, at 3:39 PM, Kathleen Wilson via dev-security-policy 
>  wrote:
> 
> All,
> 
> First, I would like to say that my preference would have been for this type 
> of change (limit SSL cert validity period to 398 days) to be agreed to in the 
> CA/Browser Forum and added to the BRs. However, the ball is already rolling, 
> and discussion here in m.d.s.p is supportive of updating Mozilla's Root Store 
> Policy to incorporate the shorter validity period. So...
> 
> What do you all think about also limiting the re-use of domain validation?
> 
> BR section 3.2.2.4 currently says: "Completed validations of Applicant 
> authority may be valid for the issuance of multiple Certificates over time."
> And BR section 4.2.1 currently says: "The CA MAY use the documents and data 
> provided in Section 3.2 to verify certificate information, or may reuse 
> previous validations themselves, provided that the CA obtained the data or 
> document from a source specified under Section 3.2 or completed the 
> validation itself no more than 825 days prior to issuing the Certificate."
> 
> In line with that, section 2.1 of Mozilla's Root Store Policy currently says:
> "CAs whose certificates are included in Mozilla's root program MUST: ...
> "5. verify that all of the information that is included in SSL certificates 
> remains current and correct at time intervals of 825 days or less;"
> 
> When we update Mozilla's Root Store Policy, should we shorten the domain 
> validation frequency to be in line with the shortened certificate validity 
> period? i.e. change item 5 in section 2.1 of Mozilla's Root Store Policy to:
> "5. limit the validity period and re-use of domain validation for SSL 
> certificates to 398 days or less if the certificate is issued on or after 
> September 1, 2020;"
> 
> I realize that in order to enforce shorter frequency in domain validation we 
> will need to get this change into the BRs and into the audit criteria. But 
> CAs are expected to follow Mozilla's Root Store Policy regardless of 
> enforcement mechanisms, and having this in our policy would make Mozilla's 
> intentions clear.
> 
> As always, I will greatly appreciate your thoughtful and constructive input 
> on this.
> 
> Thanks,
> Kathleen

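The reuse-window question Kathleen raises is likewise plain date arithmetic. A minimal sketch in Python comparing the current 825-day BR window with the proposed 398-day one; the function name and example dates are illustrative assumptions, not text from the BRs:

```python
from datetime import date, timedelta

def validation_reusable(validated_on: date, issued_on: date,
                        window_days: int = 825) -> bool:
    """Return True if a prior domain-validation result is still within the
    reuse window when the new certificate is issued (BR 4.2.1 currently
    allows 825 days; the proposal above would shorten this to 398)."""
    return (issued_on - validated_on) <= timedelta(days=window_days)

validated = date(2019, 1, 1)
issued = date(2020, 6, 1)  # roughly 17 months later
print(validation_reusable(validated, issued))       # True under 825 days
print(validation_reusable(validated, issued, 398))  # False under 398 days
```

In this example, a CA operating under the proposed policy would have to re-verify domain control before issuing the renewal certificate.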


Re: [FORGED] Firefox removes UI for site identity

2019-10-29 Thread Paul Walsh via dev-security-policy

> On Oct 29, 2019, at 12:03 PM, James Burton  wrote:
> 
> Correction: 
> 
> This isn't throwing insults at each other; it's about improving web security 
> and not directing people to the wrong conclusions, which the CA Security 
> Council has done and which is bad for the improvement of web security. 

[PW] Allow me to add something too :)

If browser vendors implemented website identity UI/UX in a meaningful way, 
making it easy for consumers to understand and use, I put forward that phishing 
attacks would take a nose dive. All the data points in my article bring me to 
this conclusion. In future I hope people can debate people’s conclusions with 
new/better data points. 

This would lead to everyone seeing massive value from CAs. It’s a very strange 
situation. I don’t see anyone complaining about browser vendors removing UI 
without trying to improve it. 

EV would then become an attack vector imo, so that would need to be tightened. 
Right now they’re using Let’s Encrypt free DV certs because of the wrong 
impression given by old and now, new browser UI. “Not secure” is going to make 
things better for privacy BUT much worse for safety. 

- Paul


> 
> Thank you
> 
> Burton
> 
> On Tue, Oct 29, 2019 at 6:56 PM James Burton  <mailto:j...@0.me.uk>> wrote:
> 
> 
> On Tue, Oct 29, 2019 at 6:29 PM Paul Walsh  <mailto:p...@metacert.com>> wrote:
> 
>> On Oct 29, 2019, at 11:17 AM, James Burton > <mailto:j...@0.me.uk>> wrote:
>> 
>> Hi Paul,
>> 
>> I take the view that the articles on the CA Security Council website are a 
>> form of marketing gimmick with no value whatsoever.
> 
> [PW] More useless feedback that only serves to insult someone trying their 
> best to add value. As I’ve said *over and over again*, if browser vendors did 
> what I recommended in the article, my own company's flagship product would be 
> rendered useless. If you call that “a form of marketing gimmick" you should 
> probably avoid going into marketing. 
> 
> When I read the CA Security Council website around the two year mark, I found 
> the content more directed toward the marketing end to help CAs promote 
> expensive products such as extended validation certificates. My opinion on 
> the matter hasn't changed. This isn't  throwing insults at each other, it's 
> about improving web security and directing people to the wrong conclusions 
> which the CA Security Council has done is bad for the improvement of web 
> security. 
> 
> 
> Every data point was taken from a competitor with links to their work. If you 
> disagree with my conclusions, say so. But throwing insults is hardly adding 
> value, is it? 
> 
> - Paul
> 
>> 
>> Thank you
>> 
>> Burton 
>> 
>> On Tue, Oct 29, 2019 at 5:55 PM Paul Walsh via dev-security-policy 
>> > <mailto:dev-security-policy@lists.mozilla.org>> wrote:
>> Hi Nick,
>> 
>> > On Oct 29, 2019, at 7:07 AM, Nick Lamb > > <mailto:n...@tlrmx.org>> wrote:
>> > 
>> > On Mon, 28 Oct 2019 16:19:30 -0700
>> > Paul Walsh via dev-security-policy
>> > > > <mailto:dev-security-policy@lists.mozilla.org>> wrote:
>> >> If you believe the visual indicator has little or no value why did
>> >> you add it? 
>> > 
>> > The EV indication dates back to the creation of Extended Validation,
>> > and so the CA/Browser forum, which is well over a decade ago now.
>> > 
>> > But it inherits its nature as a positive indicator from the SSL
>> > padlock, which dates back to the mid-1990s when Netscape developed SSL.
>> > At the time there was not yet a clear understanding that negative
>> > indicators were the Right Thing™, and because Tim's toy hypermedia
>> > system didn't have much security built in there was a lot of work to
>> > do to get from there to here.
>> > 
>> > Plenty of other bad ideas date back to the 1990s, such as PGP's "Web of
>> > Trust". I doubt that Wayne can or should answer for bad ideas just
>> > because he's now working on good ideas.
>> 
>> [PW] I agree with your conclusion. But you’re commenting on the wrong thing. 
>> You snipped my message so much that my comment above is without context. You 
>> snipped it in a way that a reader will think I’m asking about the old visual 
>> indicators for identity - I’m not. I asked Wayne if he thinks the new 
>> Firefox visual indicator for tracking is unnecessary. 
>> 
>> I don’t want to labour my points any more. Those who disagree and took the 
>> time to comment, aren’t willing to exchange meaningful, constructive, 
>> respectful counter arguments.

Re: [FORGED] Firefox removes UI for site identity

2019-10-29 Thread Paul Walsh via dev-security-policy

> On Oct 29, 2019, at 11:56 AM, James Burton  wrote:
> 
> 
> 
> On Tue, Oct 29, 2019 at 6:29 PM Paul Walsh  <mailto:p...@metacert.com>> wrote:
> 
>> On Oct 29, 2019, at 11:17 AM, James Burton > <mailto:j...@0.me.uk>> wrote:
>> 
>> Hi Paul,
>> 
>> I take the view that the articles on the CA Security Council website are a 
>> form of marketing gimmick with no value whatsoever.
> 
> [PW] More useless feedback that only serves to insult someone trying their 
> best to add value. As I’ve said *over and over again*, if browser vendors did 
> what I recommended in the article, my own company's flagship product would be 
> rendered useless. If you call that “a form of marketing gimmick" you should 
> probably avoid going into marketing. 
> 
> When I read the CA Security Council website around the two year mark, I found 
> the content more directed toward the marketing end to help CAs promote 
> expensive products such as extended validation certificates. My opinion on 
> the matter hasn't changed. This isn't  throwing insults at each other, it's 
> about improving web security and directing people to the wrong conclusions 
> which the CA Security Council has done is bad for the improvement of web 
> security. 

[PW] I think EV is expensive, time consuming and complicated. I think some CAs 
were, and continue to be overzealous in their marketing efforts by over selling 
the benefits of EV from a browser UI perspective. I also think the verification 
process can now be further improved with new blockchain-based KYC 
tech/processes. 

Some CAs are better than others - just like companies in every sector. I hope 
you and others will see that I’m completely unbiased in my personal opinions. I 
sit in the middle. And I hope people will find the time and energy to read my 
words and not read in between the lines. 

I have nothing against CAs making a lot of money when they add value. And I 
think they can add massive value - but that value can only be derived by 
browsers and other software applications that make use of their certs in a more 
meaningful way in the future. We should be questioning browser vendors here, 
not CAs. CAs are doing their bit for identity.

I’ve had many conversations with CAs over the past few months and their hearts 
are in the right place. They are trying just like the rest of us, to add value 
to society while generating revenue. Nothing is free. Either you pay for a 
product or you are the product. We all know this.

Firefox still uses Google as the default search engine even though Google is 
the least privacy-respecting search engine in the eyes of many. If Mozilla 
could build a sustainable model that didn’t involve revenue from Google it 
would probably consider using duckduckgo.com as its primary search engine.

Thanks for taking the time to say what you really think so we can get to the 
heart of the problem. Perceptions are important. Let’s try to look beyond the 
perceptions. I don’t trust Google’s motives, but I will take the time to read 
what they say and question specifics, rather than tar them all with the same brush.

- Paul

> 
> 
> Every data point was taken from a competitor with links to their work. If you 
> disagree with my conclusions, say so. But throwing insults is hardly adding 
> value, is it? 
> 
> - Paul
> 
>> 
>> Thank you
>> 
>> Burton 
>> 
>> On Tue, Oct 29, 2019 at 5:55 PM Paul Walsh via dev-security-policy 
>> > <mailto:dev-security-policy@lists.mozilla.org>> wrote:
>> Hi Nick,
>> 
>> > On Oct 29, 2019, at 7:07 AM, Nick Lamb > > <mailto:n...@tlrmx.org>> wrote:
>> > 
>> > On Mon, 28 Oct 2019 16:19:30 -0700
>> > Paul Walsh via dev-security-policy
>> > > > <mailto:dev-security-policy@lists.mozilla.org>> wrote:
>> >> If you believe the visual indicator has little or no value why did
>> >> you add it? 
>> > 
>> > The EV indication dates back to the creation of Extended Validation,
>> > and so the CA/Browser forum, which is well over a decade ago now.
>> > 
>> > But it inherits its nature as a positive indicator from the SSL
>> > padlock, which dates back to the mid-1990s when Netscape developed SSL.
>> > At the time there was not yet a clear understanding that negative
>> > indicators were the Right Thing™, and because Tim's toy hypermedia
>> > system didn't have much security built in there was a lot of work to
>> > do to get from there to here.
>> > 
>> > Plenty of other bad ideas date back to the 1990s, such as PGP's "Web of
>> > Trust". I doubt that Wayne can or should answer for bad ideas just
> > because he's now working on good ideas.

Re: [FORGED] Firefox removes UI for site identity

2019-10-29 Thread Paul Walsh via dev-security-policy

> On Oct 29, 2019, at 11:17 AM, James Burton  wrote:
> 
> Hi Paul,
> 
> I take the view that the articles on the CA Security Council website are a 
> form of marketing gimmick with no value whatsoever.

[PW] More useless feedback that only serves to insult someone trying their best 
to add value. As I’ve said *over and over again*, if browser vendors did what I 
recommended in the article, my own company's flagship product would be rendered 
useless. If you call that “a form of marketing gimmick" you should probably 
avoid going into marketing. 

Every data point was taken from a competitor with links to their work. If you 
disagree with my conclusions, say so. But throwing insults is hardly adding 
value, is it? 

- Paul

> 
> Thank you
> 
> Burton 
> 
> On Tue, Oct 29, 2019 at 5:55 PM Paul Walsh via dev-security-policy 
>  <mailto:dev-security-policy@lists.mozilla.org>> wrote:
> Hi Nick,
> 
> > On Oct 29, 2019, at 7:07 AM, Nick Lamb  > <mailto:n...@tlrmx.org>> wrote:
> > 
> > On Mon, 28 Oct 2019 16:19:30 -0700
> > Paul Walsh via dev-security-policy
> >  > <mailto:dev-security-policy@lists.mozilla.org>> wrote:
> >> If you believe the visual indicator has little or no value why did
> >> you add it? 
> > 
> > The EV indication dates back to the creation of Extended Validation,
> > and so the CA/Browser forum, which is well over a decade ago now.
> > 
> > But it inherits its nature as a positive indicator from the SSL
> > padlock, which dates back to the mid-1990s when Netscape developed SSL.
> > At the time there was not yet a clear understanding that negative
> > indicators were the Right Thing™, and because Tim's toy hypermedia
> > system didn't have much security built in there was a lot of work to
> > do to get from there to here.
> > 
> > Plenty of other bad ideas date back to the 1990s, such as PGP's "Web of
> > Trust". I doubt that Wayne can or should answer for bad ideas just
> > because he's now working on good ideas.
> 
> [PW] I agree with your conclusion. But you’re commenting on the wrong thing. 
> You snipped my message so much that my comment above is without context. You 
> snipped it in a way that a reader will think I’m asking about the old visual 
> indicators for identity - I’m not. I asked Wayne if he thinks the new Firefox 
> visual indicator for tracking is unnecessary. 
> 
> I don’t want to labour my points any more. Those who disagree and took the 
> time to comment, aren’t willing to exchange meaningful, constructive, 
> respectful counter arguments. Those who disagree but aren’t commenting, may 
> or may not care at all. And those who agree mostly show their support in 
> private. I feel like this conversation is sucking up all the oxygen as a 
> result.
> 
> If we are all doing such a great job, attacks wouldn’t be on the rise and 
> phishing wouldn’t be the number 1 problem. And we all know phishing is where 
> a user falls for a deceptive website. 
> 
> One last time, here’s the article I wrote with many data points 
> https://casecurity.org/2019/10/10/the-insecure-elephant-in-the-room/ 
> 
> I’m going to edit this article for Hackernoon, to include additional context 
> about my support *for* encryption, https, padlock and free DV certs. I support 
> them all, obviously. But some people assume I don’t support these critical 
> elements because I pointed out the negative impact that their implementation 
> is having.
> 
> Thanks,
> - Paul
> 
> > 
> > Nick.
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org 
> <mailto:dev-security-policy@lists.mozilla.org>
> https://lists.mozilla.org/listinfo/dev-security-policy 
> <https://lists.mozilla.org/listinfo/dev-security-policy>



Re: [FORGED] Firefox removes UI for site identity

2019-10-29 Thread Paul Walsh via dev-security-policy
Hi Nick,

> On Oct 29, 2019, at 7:07 AM, Nick Lamb  wrote:
> 
> On Mon, 28 Oct 2019 16:19:30 -0700
> Paul Walsh via dev-security-policy
>  wrote:
>> If you believe the visual indicator has little or no value why did
>> you add it? 
> 
> The EV indication dates back to the creation of Extended Validation,
> and so the CA/Browser forum, which is well over a decade ago now.
> 
> But it inherits its nature as a positive indicator from the SSL
> padlock, which dates back to the mid-1990s when Netscape developed SSL.
> At the time there was not yet a clear understanding that negative
> indicators were the Right Thing™, and because Tim's toy hypermedia
> system didn't have much security built in there was a lot of work to
> do to get from there to here.
> 
> Plenty of other bad ideas date back to the 1990s, such as PGP's "Web of
> Trust". I doubt that Wayne can or should answer for bad ideas just
> because he's now working on good ideas.

[PW] I agree with your conclusion. But you’re commenting on the wrong thing. 
You snipped my message so much that my comment above is without context. You 
snipped it in a way that a reader will think I’m asking about the old visual 
indicators for identity - I’m not. I asked Wayne if he thinks the new Firefox 
visual indicator for tracking is unnecessary. 

I don’t want to labour my points any more. Those who disagree and took the time 
to comment aren’t willing to exchange meaningful, constructive, respectful 
counterarguments. Those who disagree but aren’t commenting, may or may not 
care at all. And those who agree mostly show their support in private. I feel 
like this conversation is sucking up all the oxygen as a result.

If we are all doing such a great job, attacks wouldn’t be on the rise and 
phishing wouldn’t be the number 1 problem. And we all know phishing is where a 
user falls for a deceptive website. 

One last time, here’s the article I wrote with many data points 
https://casecurity.org/2019/10/10/the-insecure-elephant-in-the-room/ 

I’m going to edit this article for Hackernoon, to include additional context 
about my support *for* encryption, https, padlock and free DV certs. I support 
them all, obviously. But some people assume I don’t support these critical 
elements because I pointed out the negative impact that their implementation is 
having.

Thanks,
- Paul

> 
> Nick.



Re: [FORGED] Firefox removes UI for site identity

2019-10-28 Thread Paul Walsh via dev-security-policy

> On Oct 28, 2019, at 3:39 PM, Wayne Thayer  wrote:
> 
> Hi Paul,
> 
> On Mon, Oct 28, 2019 at 2:41 PM Paul Walsh via dev-security-policy 
>  <mailto:dev-security-policy@lists.mozilla.org>> wrote:
>  
> [PW] So you dislike Mozilla’s implementation for the tracker icon in the 
> address bar? When you update to 70.0 you’re prompted with an educational-type 
> pop-out to draw your attention to the visual indicator. Do you think that’s a 
> bad idea? Do you think users should just know how to use browser software? 
> 
> 
> This repeated comparison of the EV indicator to the privacy shield is apples 
> to orangutans. The security and privacy of a Firefox user doesn't depend on 
> them interacting with the privacy shield. If a user never notices the privacy 
> shield, that user will be as secure as one who examines it on every page 
> load. It follows that there is no need for users to be properly trained to 
> interact with the privacy shield to protect themselves. This gets to the root 
> of the problem with the EV UI as a positive security indicator.

[PW] Good point in regards to the fact that users are better protected even if 
they’re not aware of it. 

If you believe the visual indicator has little or no value why did you add it? 

Also, Mozilla has not conducted, or referenced, recent research to prove that 
well-designed UI can’t work - only that previous implementations didn’t work. 
There’s no need to do that, as we are in agreement on this point. 

- Paul


> 
> - Wayne



Re: [FORGED] Firefox removes UI for site identity

2019-10-28 Thread Paul Walsh via dev-security-policy

> On Oct 28, 2019, at 2:12 PM, James Burton  wrote:
> 
> [PW] Phil knows more about the intent so I’ll defer to his response at the 
> end of this thread. I would like to add that computer screens bigger than 
> mobile devices aren’t going away. So focusing only on mobile isn’t a good 
> idea. 
> 
> Thanks for the constructive conversation James, finally :) But I don’t 
> necessarily agree with your assertion about there being a lack of room to 
> support identity. It all comes down to priority as you know. We could have 
> said that Firefox mobile didn’t have enough room for tracking icons/settings 
> before it was implemented - but because Mozilla feels this is important, they 
> made the room. They made assertions about the lack of real estate for 
> identity prior to implementing visual indicators for tracking. 
> 
> Mozilla once asserted that it wouldn’t implement any filtering 
> tools/preferences for any reason because it was considered “censorship”. They 
> have clearly changed their position - thankfully, with the filters for 
> trackers/ads. 
> 
> Mozilla dropped its mobile browser strategy completely for a long period of 
> time, but the team is now focused on mobile again. So things do change with 
> time and realization of market conditions and mistakes. Everyone makes 
> mistakes.
> 
>> 
>> It's right that we are removing the extended validation style visual 
>> security indicator from browsers because of a) the above statement b)
> 
> One could argue that there’s less room inside an app WebView - where there's 
> so much inconsistency it hurts my head. Here’s an example of a design 
> implementation that *might* work to help demonstrate my point about there 
> being enough room - it’s not ideal but I only spent 5 minutes on it. [1] 
> 
> I took a look at your concept of an extended validation type visual security 
> indicator and the conclusion is that it doesn't provide any assurance to the 
> users that the website is vetted or trustworthy. This concept is similar to 
> the padlock visual security indicator and that too doesn't provide any 
> assurance to the users that the website is vetted or trustworthy. The padlock 
> visual security indicator only provides the user a visual indication that the 
> connection is encrypted.

[PW] You’re getting hooked on an icon. Please don’t do that. I’m just showing 
you that it’s possible to find real estate. You said there was no room. I 
proved there is. So how about you either admit to being wrong, or explain why 
I’m wrong instead of commenting on the shape, size and color of an icon. I’m 
shrugging my shoulders at your reply. 

Separately, this particular visual indicator worked and continues to work for 
us - but again, let’s not debate on the design elements.

> 
> Read Emily Stark's Twitter response regarding Chrome and the removal of the 
> padlock visual security indicator: 
> https://twitter.com/estark37/status/1183769863841386496?s=20 
> 
> 
> 
>> normal users don't understand extended validation style visual security 
>> indicators c)
> 
> Because they were never educated properly - UX sucked more than anything. But 
> you don’t just remove something without iterating to achieve product/market 
> fit. That’s what happened with identity.
>  
> Users shouldn't have to go through education lessons to recognise different 
> positive visual security indicators. Its a stupid idea.

[PW] So you dislike Mozilla’s implementation for the tracker icon in the 
address bar? When you update to 70.0 you’re prompted with an educational-type 
pop-out to draw your attention to the visual indicator. Do you think that’s a 
bad idea? Do you think users should just know how to use browser software? 

> 
> Next stupid idea will be expecting users to go through a compulsory exam to 
> learn about the different positive visual security indicators. 

[PW] That’s pretty insulting but I’ve come to expect that from people who 
disagree with me on this list. I don’t see anyone else contributing in any way, 
in regards to how we can address this problem through collaboration. All I hear 
is childish screaming; “EV is broken” - it’s like a broken record with zero 
data. We know old implementations were crap. But that’s like saying the first 
version of the seatbelt was flawed, so it shouldn’t have progressed through 
design iteration to make it work. 

BTW, Brave has an indicator for shields - seems to work pretty well. That’s a 
type of security that requires user education. But with good UI/UX it’s 
possible to get it right - which is why I guess Brave is taking market share 
from Firefox and Chrome and will continue to do so as it does some things 
better.

> If failed, they can't purchase goods online. If passed, they get a license 
> issued to allow them to purchase goods online. 
> 
> Browsers iterating positive visual security indicators to achieve 
> product/market fit is another stupid idea. It's 

Re: [FORGED] Firefox removes UI for site identity

2019-10-28 Thread Paul Walsh via dev-security-policy
On Oct 25, 2019, at 7:56 AM, Phillip Hallam-Baker  wrote:
> 
> 
> 
> On Fri, Oct 25, 2019 at 4:21 AM James Burton  > wrote:
> Extended validation was introduced at a time when mostly everyone browsed the 
> internet using low/medium resolution large screen devices that provided the 
> room for an extended validation style visual security indicator . Everything 
> has moved on and purchases are made on small screen devices that has no room 
> to support an extended validation style visual security indicator. Apple 
> supported  extended validation style visual security indicator in iOS browser 
> and it failed [1] [2].

[PW] Phil knows more about the intent so I’ll defer to his response at the end 
of this thread. I would like to add that computer screens bigger than mobile 
devices aren’t going away. So focusing only on mobile isn’t a good idea. 

Thanks for the constructive conversation James, finally :) But I don’t 
necessarily agree with your assertion about there being a lack of room to 
support identity. It all comes down to priority as you know. We could have said 
that Firefox mobile didn’t have enough room for tracking icons/settings before 
it was implemented - but because Mozilla feels this is important, they made the 
room. They made assertions about the lack of real estate for identity prior to 
implementing visual indicators for tracking. 

Mozilla once asserted that it wouldn’t implement any filtering 
tools/preferences for any reason because it was considered “censorship”. They 
have clearly changed their position - thankfully, with the filters for 
trackers/ads. 

Mozilla dropped its mobile browser strategy completely for a long period of 
time, but the team is now focused on mobile again. So things do change with 
time and realization of market conditions and mistakes. Everyone makes mistakes.

> 
> It's right that we are removing the extended validation style visual security 
> indicator from browsers because of a) the above statement b)

One could argue that there’s less room inside an app WebView - where there's so 
much inconsistency it hurts my head. Here’s an example of a design 
implementation that *might* work to help demonstrate my point about there being 
enough room - it’s not ideal but I only spent 5 minutes on it. [1] 

> normal users don't understand extended validation style visual security 
> indicators c)

Because they were never educated properly - UX sucked more than anything. But 
you don’t just remove something without iterating to achieve product/market 
fit. That’s what happened with identity.

> the inconsistencies of extended validation style visual security indicator 
> between browsers d) users can't tell who is real or not based on extended 
> validation style visual security indicators as company names sometimes don't 
> match the actual site name. 

I agree. This is why they should have been improved instead of removed. Mozilla 
will likely iterate the UI/UX around tracking to improve adoption.

Ian, like every other commentator I’ve read on this subject, says things that I 
agree with. But their conclusions and proposals are completely flawed in my 
opinion. As I’ve said before, you don’t just remove something that doesn’t see 
major adoption - you iterate/test. You’d only remove UI if you knew for sure 
that it can’t be improved - there’s no data to suggest that any research was 
done around this. Mozilla have only supplied links to research that’s flawed 
and so old it’s useless. I’m blown away by their referencing research from more 
than 10 years ago. Some amazing people on this list weren’t even working with 
web tech back then. 

> 
> [1]  https://www.typewritten.net/writer/ev-phishing 
> 
> [2]  https://stripe.ian.sh 
>  

[PW] [1] https://imgur.com/Va4heuo

- Paul




> The original proposal that led to EV was actually to validate the company 
> logos and present them as logotype.
> There was a ballot proposed here to bar any attempt to even experiment with 
> logotype. This was withdrawn after I pointed out to Mozilla staff that there 
> was an obvious anti-Trust concern in using the threat of withdrawing roots 
> from a browser with 5% market share to suppress deployment of any feature.
> 
> Now for the record, that is what a threat looks like: we will destroy your 
> company if you do not comply with our demands. Asking to contact the Mozilla 
> or Google lawyers because they really need to know what one of their 
> employees is doing is not.
> 
> Again, the brief here is to provide security signals that allow the user to 
> protect themselves.
> 
> -- 
> Website: http://hallambaker.com/ 



Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-24 Thread Paul Walsh via dev-security-policy
Apologies for the massive number of typos. I was angry when I read the response 
to my thoughtful messages. I tried my best to hold back. I didn’t even have the 
energy to check what I’d written before hitting send. 



> On Oct 24, 2019, at 7:37 PM, Paul Walsh  wrote:
> 
> 
>> On Oct 24, 2019, at 6:53 PM, Peter Gutmann  wrote:
>> 
>> Paul Walsh via dev-security-policy  
>> writes:
>> 
>>> we conducted the same research with 85,000 active users over a period of 
>>> 12 months
>> 
>> As I've already pointed out weeks ago when you first raised this, your
>> marketing department conducted a survey of EV marketing effectiveness.  
> 
> [PW] With respect Peter, articulating your opinion doesn’t make it a matter 
> of fact. Read the article properly and you will see that it’s not from a 
> marketing department. It’s a small startup that wanted to conduct a social 
> experiment. 
> 
>> If
>> you have a refereed, peer-reviewed study published at a conference or in 
>> an academic journal, please reference it, not a marketing survey 
>> masquerading as a "study”.
> 
> Rubbish. We don’t need to publish at a conference or in an academic journal 
> for it to demonstrate a point. If *you* don’t want to trust it, that’s ok. I 
> don’t expect everyone to trust everything that is written.
> 
> As Homer Simpson said; “70% of all reports are made up”. 
> 
> Our work is not marketing - you obviously didn’t read the methodology and the 
> reasons or you wouldn’t make such silly comments. 
> 
>> 
>> A second suggestion, if you don't want to publish any research (by which I
>> mean real research, not rent-seeking CA marketing) supporting your position, 
> 
> Did you read any of the words I wrote? I’ve said more than once that I don’t 
> work for a CA - never have. You’re obviously a CA-hater and hate everything 
> that’s ever discussed about website identity. Haters are gonna hate. I 
> couldn’t be more impartial.
> 
> 
>> is that you fork Firefox - it is after all an open-source product - add 
>> whatever EV UI you like to it, and publish it as an alternative to Firefox.  
>> If your approach works as you claim, it'll be so obviously superior to 
>> Firefox that everyone will go with your fork rather than the original.
> 
> Another weird comment. Forking code and building products doesn’t mean people 
> will use it. I have nothing to prove to anyone. If all the browser vendors 
> did as I suggest it would mean there’s no need for our flagship product. So 
> how on earth could I be biased? My commentary is counterproductive for my 
> shareholders and team. But I care about what’s in the best interest of the 
> industry. You 
> clearly don’t because you need to have the word “Google” or “Stanford” 
> stamped on a PDF. None of the authors of any of those documents come close to 
> the level of experience that my team and I have - including our industry 
> contributions. I was the first person to ever re-write Tim Berners-Lee’s 
> vision of the “one web” when I co-founded the Mobile Web Initiative. I 
> shouldn’t have to throw these things around just to appease you. Do your 
> research if you actually care.
> 
>> 
>> For everyone else who feels this interminable debate has already gone on
>> far too long and I'm not helping it, yeah, sorry, I'd consigned the thread 
>> to the spam folder for awhile, had a brief look back, and saw this, which 
>> indicates it's literally gone nowhere in about a month.
> 
> Go play in your spam folder for a little longer because I’m done responding 
> to your insults. You didn’t question anything except our intent, which 
> amounts to questioning my integrity. I won’t accept that - it’s as insulting as it gets.
> 
>> 
>> I can see why Mozilla avoided this endless broken-record discussion, it's
>> not contributing anything but just going round and round in circles.
> 
> It’s going around in circles because you refuse to take the time and effort 
> to read what has been written. Instead, you assume we have ulterior motives. 
> As I’ve said, my motives are not necessarily in the best interest of my 
> company. 
> 
> - Paul
> 
>> 
>> Peter.
> 



Re: [FORGED] Re: Firefox removes UI for site identity

2019-10-24 Thread Paul Walsh via dev-security-policy

> On Oct 24, 2019, at 6:53 PM, Peter Gutmann  wrote:
> 
> Paul Walsh via dev-security-policy  
> writes:
> 
>> we conducted the same research with 85,000 active users over a period of 
>> 12 months
> 
> As I've already pointed out weeks ago when you first raised this, your
> marketing department conducted a survey of EV marketing effectiveness.  

[PW] With respect Peter, articulating your opinion doesn’t make it a matter 
of fact. Read the article properly and you will see that it’s not from a 
marketing department. It’s a small startup that wanted to conduct a social 
experiment. 

> If
> you have a refereed, peer-reviewed study published at a conference or in 
> an academic journal, please reference it, not a marketing survey 
> masquerading as a "study”.

Rubbish. We don’t need to publish at a conference or in an academic journal for 
it to demonstrate a point. If *you* don’t want to trust it, that’s ok. I don’t 
expect everyone to trust everything that is written.

As Homer Simpson said; “70% of all reports are made up”. 

Our work is not marketing - you obviously didn’t read the methodology and the 
reasons or you wouldn’t make such silly comments. 

> 
> A second suggestion, if you don't want to publish any research (by which I
> mean real research, not rent-seeking CA marketing) supporting your position, 

Did you read any of the words I wrote? I’ve said more than once that I don’t 
work for a CA - never have. You’re obviously a CA-hater and hate everything 
that’s ever discussed about website identity. Haters are gonna hate. I couldn’t 
be more impartial.


> is that you fork Firefox - it is after all an open-source product - add 
> whatever EV UI you like to it, and publish it as an alternative to Firefox.  
> If your approach works as you claim, it'll be so obviously superior to 
> Firefox that everyone will go with your fork rather than the original.

Another weird comment. Forking code and building products doesn’t mean people 
will use it. I have nothing to prove to anyone. If all the browser vendors did 
as I suggest it would mean there’s no need for our flagship product. So how on 
earth could I be biased? My commentary is counterproductive for my 
shareholders and team. But I care about what’s in the best interest of the industry. You 
clearly don’t because you need to have the word “Google” or “Stanford” stamped 
on a PDF. None of the authors of any of those documents come close to the level 
of experience that my team and I have - including our industry contributions. I 
was the first person to ever re-write Tim Berners-Lee’s vision of the “one 
web” when I co-founded the Mobile Web Initiative. I shouldn’t have to throw 
these things around just to appease you. Do your research if you actually care.

> 
> For everyone else who feels this interminable debate has already gone on
> far too long and I'm not helping it, yeah, sorry, I'd consigned the thread 
> to the spam folder for awhile, had a brief look back, and saw this, which 
> indicates it's literally gone nowhere in about a month.

Go play in your spam folder for a little longer because I’m done responding to 
your insults. You didn’t question anything except our intent, which amounts to 
questioning my integrity. I won’t accept that - it’s as insulting as it gets.

> 
> I can see why Mozilla avoided this endless broken-record discussion, it's
> not contributing anything but just going round and round in circles.

It’s going around in circles because you refuse to take the time and effort to 
read what has been written. Instead, you assume we have ulterior motives. As 
I’ve said, my motives are not necessarily in the best interest of my company. 

- Paul

> 
> Peter.



Re: Firefox removes UI for site identity

2019-10-24 Thread Paul Walsh via dev-security-policy

> On Oct 24, 2019, at 2:59 PM, Julien Vehent via dev-security-policy 
>  wrote:
> 
> On Thursday, October 24, 2019 at 5:31:59 PM UTC-4, Paul Walsh wrote:
>> There is zero data from any company to prove that browser UI for website 
>> identity can’t work.
> 
> https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf

I’ve read this. It’s 12 years old! And consisted of 27 users broken into 
groups. I’m surprised that’s being cited as meaningful research/data in 2019. 
Some participants here weren’t even out of high school back then. I’m jealous.

I don’t know if you read our findings already Julien [1] but we conducted the 
same research with 85,000 active users over a period of 12 months - Chrome, 
Brave, Firefox and Opera. I have documented the entire process along with the 
method used to determine whether or not the visual indicator had achieved 
product/market fit. Our research started in December 2017 and lasted more than 
a year. This same software is now being sold into businesses of different 
sizes. Since it was first released, we have had zero victims of a deceptive 
website. And according to our MSP partners, their support calls and emails are 
massively reduced because when relying on the visual indicator we designed, 
they are less likely to report “suspicious” emails or websites. 

It’s by no means perfect, but when a popular crypto DNS was compromised we 
changed the classification so it was immediately blocked. This is an edge case 
that requires more work.

For context, my engineers were the same people who built the official browser 
add-ons for digg, Delicious, Yahoo!, eBay, PayPal, Google and Microsoft. They 
contributed to Firefox bug fixing and my COO started the Firefox developer 
evangelist community. Our first API for child safety was supposed to be 
integrated with Firefox but weirdly one engineer thought it was censoring the 
web so Chris Hoffman, Mitch and others decided not to proceed.

So, we’re a tiny player, but there are fewer people with more experience in 
browser software, visual indicators and URL Classification. This doesn’t mean 
we’re more right - it just means that our assertions should be taken seriously 
and not disregarded as “vendor marketing”. 

We also built the first ever security integration for native email clients - 
here’s a video demo of link annotation for the Apple Mail client 
https://www.youtube.com/watch?v=elutAAsboyE - visual indicators can and do work 
when done well. 
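To make the link-annotation idea above concrete, here is a minimal sketch. Everything in it is an assumption for illustration only: the `VERIFIED` set, the naive two-label registered-domain heuristic, and the function names are invented, not MetaCert’s actual API or data. A real implementation would query a classification service and use the Public Suffix List rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Hypothetical verified-domain registry; a real system would query a
# classification service, not a hard-coded set.
VERIFIED = {"paypal.com", "ebay.com"}

def classify(url: str) -> str:
    """Return a coarse label for a hyperlink's host."""
    host = (urlparse(url).hostname or "").lower()
    # Collapse subdomains to the registered domain (naive two-label
    # heuristic; real code would consult the Public Suffix List).
    parts = host.split(".")
    registered = ".".join(parts[-2:]) if len(parts) >= 2 else host
    return "verified" if registered in VERIFIED else "unknown"

def annotate(urls):
    # Map each link to the indicator a mail client could render beside it.
    badge = {"verified": "V", "unknown": "?"}
    return [(u, badge[classify(u)]) for u in urls]
```

A mail client could then draw the returned badge next to each hyperlink, which is roughly the behaviour shown in the video demo.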

It was very easy for us to educate users of the visual indicators and it was/is 
easy for them to rely on them. Similar to how I suspect you want users to rely 
on your new UI for tracking. We didn’t even have a website for this product 
until about 3 weeks ago and our onboarding sucks right now.

I would urge you to read about this and feel free to ask me any question you 
like in public or private. Please, when you read it though, assume that I love 
https, free dv certs, the lock and encryption - my article talks about the 
downside in regards to “how” these are being implemented.

Furthermore, my R&D into visual indicators started in 2004 - before EV was even 
considered by its creators. Every member of the W3C Semantic Web Education & 
Outreach Program (of which I was a member) voted our ‘proof of concept’ add-on 
as one of the most compelling implementations of the Semantic Web 
https://www.w3.org/2001/sw/sweo/public/UseCases/Segala/ 
 I’m highlighting this 
because the data/research we did back then isn’t relevant today - just like the 
research you refer to isn’t relevant today in my opinion. 

Timing and market conditions are everything. In my article I also draw 
conclusions about the relationship between phishing and the other components 
mentioned - using a massive number of data points from various cybersecurity 
companies that face these problems daily.

> 
> "In this paper, we presented a controlled between-subjects evaluation of the 
> extended validation user interface in Internet Explorer 7. Unfortunately, 
> participants who received no training in browser security features did not 
> notice the extended validation indicator and did not outperform the control 
> group.”

If this was true, no browser vendor would be able to release new features for 
anything. That said, browser settings is generally where UX goes to die a slow 
death. But some browser vendors do some things very well - Firefox tracking is 
good. Brave “shields" is probably the best implementation of anti-tracking I’ve 
seen because it’s the main utility. 

> 
> https://storage.googleapis.com/pub-tools-public-publication-data/pdf/400599205ab5a1c9efa03e2a7c127eb8200bf288.pdf
> 
> "We conclude that modern browser identity indicators are not effective. To 
> design better identity indicators, we recommend that browsers consider 
> focusing on active negative indicators, explore using prominent UI as an 
> opportunity for user 

Re: Firefox removes UI for site identity

2019-10-24 Thread Paul Walsh via dev-security-policy
On Oct 24, 2019, at 12:36 PM, Phillip Hallam-Baker via dev-security-policy 
 wrote:
> 
> Eric,
> 
> I am not going to be gaslighted here.
> 
> Just what was your email supposed to do other than "suppressing dialogue
> within this community"?
> 
> I was making no threat, but if I was still working for a CA, I would
> certainly get the impression that you were threatening me.
> 
> The bullying and unprofessional behavior of a certain individual is one of
> the reasons I have stopped engaging in CABForum, an organization I
> co-founded. My contributions to this industry began in 1992 when I began
> working on the Web with Tim Berners-Lee at CERN.
> 
> 
> The fact that employees who work on what is the third largest browser also
> participate in the technical and policy discussions of the third largest
> browser which is also the only multi-party competitor should be a serious
> concern to Google and Mozilla. It is a clear anti-Trust liability to both
> concerns. People here might think that convenient, but it is not the sort
> of arrangement I for one would like to be having to defend in Congressional
> hearings.
> 
> As I said, I do not make threats. My concern here is that we have lost
> public confidence. We are no longer the heroes we once were and politicians
> in your own party are now running against 'Big Tech'. We already had DoH
> raised in the House this week and there is more to come. We have six months
> at most to put our house in order.

[PW] +1 on everything said by Phil. I particularly like "We are no longer the 
heroes we once were”. The fact that Phil stopped contributing to the CABForum 
due to one bully means industry loses out - I’ve noticed a massive decline in 
participation from many members - some of them for the same reason as I told me 
in private.

I’d like to add that I’ve only met Phil once, when we were both speakers at the 
W3C WWW2006 conference. I showed him a Firefox add-on with visual indicators 
for search engines, and he explained to me the concept of a URL bar that would 
turn green (set aside accessibility challenges with color-only for now) so 
users can avoid counterfeit websites. I was blown away by the idea and by the 
possible implementations with browsers. How could a user possibly fall for a 
deceptive website?! It’s ***2019*** and people falling for deceptive websites 
and dangerous URIs is the #1 problem in cybersecurity - and it’s getting worse.

But alas, browser vendors didn’t design the UI/UX in the way it was expected. 
And instead of iterating the UI/UX based on user feedback until product/market 
fit was achieved, vendors decided to remove it all. And instead of looking 
inward to see what they could have done better, they blame the companies that 
simply provided the information for them to display in their UI. 

There is zero data from any company to prove that browser UI for website 
identity can’t work. I could write a white paper on why it didn’t work - and 
why it couldn’t have worked - given how it *was* implemented. This is not 
research - this is confirmation bias. There isn’t a single successful product 
or feature that didn’t require iteration. 

So, the next time a person says “EV is broken” or “website identity can’t work” 
please think about what I just said and imagine actual browser designers and 
developers who were/are responsible for that work. They were never given a 
chance to get it right.

I don’t work for a CA and never have. But I’m sick and tired of the bullying 
tactics from some individuals who work for major players - it’s toxic.  *Not* 
referring to you Eric :)

If we want to discuss CA marketing/sales and verification processes then let’s 
do that - *separate* to browser UI implementations. 

And here’s what’s almost funny: we’re going to see the very same mistakes made 
for email. Everyone involved in BIMI [1] asserts that it has nothing to do with 
security - it’s all about marketing. Yet almost everything about its benefits 
and execution is security-related. They’re about to make all the same silly 
mistakes over again. 

https://bimigroup.org 

Regards,

- Paul


> 
> 
> 
> On Thu, Oct 24, 2019 at 12:29 PM Eric Mill  wrote:
> 
>> Phillip, that was an unprofessional contribution to this list, that could
>> be read as a legal threat, and could contribute to suppressing dialogue
>> within this community. And, given that the employee to which it is clear
>> you are referring is not only a respected community member, but literally a
>> peer of the Mozilla Root Program, it is particularly unhelpful to Mozilla's
>> basic operations.
>> 
>> On Wed, Oct 23, 2019 at 10:33 AM Phillip Hallam-Baker via
>> dev-security-policy  wrote:
>> 
>>> On Tue, Oct 22, 2019 at 7:49 PM Matt Palmer via dev-security-policy <
>>> dev-security-policy@lists.mozilla.org> wrote:
>>> 
 On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via
 dev-security-policy wrote:
> I also have a question for Mozilla on the removal of 

Re: Firefox removes UI for site identity

2019-10-23 Thread Paul Walsh via dev-security-policy
On Oct 22, 2019, at 4:49 PM, Matt Palmer via dev-security-policy 
 wrote:
> 
> On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via dev-security-policy 
> wrote:
>> I also have a question for Mozilla on the removal of the EV UI.
> 
> This is a mischaracterisation.  The EV UI has not been removed, it has been
> moved to a new location.

[PW] Technically, I think you are both correct, Matt. Please allow me to 
provide an analogy to explain why I say "removed" instead of "moved".

If an owner puts up a sign in their store window that says “we have moved to…” 
customers will know they have “moved". But if the owner vacates the premises 
without notice, customers will naturally assume it has closed down (i.e. 
removed). A few might go looking for them. But most won’t. 

I personally use the term “removed” because Mozilla hasn’t actually signposted 
the changes anywhere. The original UI and UX was poor, which is why most people 
don’t know the difference between EV and DV icons. Instead of making it better, 
they made it much worse. 

The team didn’t even include the update in the release notes until I brought it 
to their attention. Even then it’s not in plain English - using the term “EV” 
instead of "website identity” just shows how badly they have always 
communicated the meaning of the UI to consumers. But what’s the point in 
debating that? The horse has bolted. 

Mozilla did, however, take great care in educating users about the new tracking 
features and new UI. This only helps to demonstrate that it’s possible to 
educate users about a new feature or UI implementation for identity. But again, 
I digress. So we’ll just keep this as a receipt to prove that browser vendors 
believe it’s possible to train users to look for new visual indicators - 
contrary to what they say about identity information. 

> 
>> So my question to Mozilla is, why did Mozilla post this as a subject on
>> the mozilla.dev.security.policy list if it didn't plan to interact with
>> members of the community who took the time to post responses?
> 
> What leads you to believe that Mozilla didn't plan to interact with members
> of the community?  It is entirely plausible that if any useful responses
> that warranted interaction were made, interaction would have occurred.
> 
> I don't believe that Mozilla is obliged to respond to people who have
> nothing useful to contribute, and who don't accurately describe the change
> being made.

[PW] I agree and disagree. I agree, because Mozilla is not obliged to do 
anything it doesn’t want to do. It’s not obliged to engage with the community. 
It’s not obliged to engage with anyone it doesn’t want to. 

I disagree because no company, especially an open source, community driven 
foundation, should make changes that upset important stakeholders. Aside from 
the bad karma, it is poor product management. Perhaps the lack of community 
engagement in recent times is part of the reason for losing market share? Who 
knows. Either way it can be made better. I personally love the brand and what 
it stands for.

> 
>> This issue started with a posting by Mozilla on August 12, but despite 237
>> subsequent postings from many members of the Mozilla community, I don't
>> think Mozilla staff ever responded to anything or anyone - not to explain
>> or justify the decision, not to argue.  Just silence.
> 
> I think the decision was explained and justified in the initial
> announcement.  No information that contradicted the provided justification
> was presented, so I don't see what argument was required.

[PW] This is not a good way to build a product. I and many others called 
Mozilla out for making poor decisions around its OS and mobile browser 
strategies (or lack thereof). So it’s possible for browser vendors to get big 
things very wrong. 

> 
>> In the future, if Mozilla has already made up its mind and is not
>> interested in hearing back from the community, it might be better NOT to
>> start a discussion on the list soliciting feedback.
> 
> Soliciting feedback and hearing back from the community does not require
> response from Mozilla, merely reading.  Do you have any evidence that
> Mozilla staff did not, in fact, read the feedback that was given?

[PW] If true, this is no longer the Mozilla that my team contributed to. As one 
of the first 50 contributors to Mozilla, my COO helped to build the Firefox 
developer evangelist community and he built spreadfirefox.com - my engineers 
contributed to Firefox code too. I don’t ever recall witnessing anyone use the 
words you chose to describe how the team should behave. Perhaps your words 
reflect current thinking… 

- Paul

> 
> - Matt
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy


Re: Firefox removes UI for site identity

2019-10-22 Thread Paul Walsh via dev-security-policy
Thanks Johann. Much appreciated. Would you be kind enough to email me a screen 
shot to save me the trouble of installing an older version and then waiting for 
an update? :)

Thanks,
- Paul


> On Oct 22, 2019, at 1:29 PM, Johann Hofmann  wrote:
> 
> Hi Paul,
> 
> thanks for the heads up. This wasn't intentional and I've reached out to get 
> the security UI changes added to the release notes for 70. You're right that 
> this is significant enough to be included. The page should be updated very 
> soon, so that most users will see the new version (due to throttled rollouts 
> and a general delay in users updating).
> 
> Cheers,
> 
> Johann
> 
> On Tue, Oct 22, 2019 at 9:06 PM Paul Walsh via dev-security-policy 
>  <mailto:dev-security-policy@lists.mozilla.org>> wrote:
> Direct question for Mozilla. 
> 
> Today, the website identity UI was removed from Firefox. “We” knew it was 
> coming. But millions of users didn’t. 
> 
> Why wasn’t this mentioned in the release notes on the page that’s 
> automatically opened following the update? 
> 
> Someone might say “they didn’t know it was there anyway”. While this is true 
> for the vast majority, it doesn’t answer my question. And it’s not 100% 
> accurate for every user of Firefox. 
> 
> It’s significant enough to warrant being mentioned in my opinion. And a blog 
> post doesn’t count. 
> 
> Thanks,
> - Paul
> 



Firefox removes UI for site identity

2019-10-22 Thread Paul Walsh via dev-security-policy
Direct question for Mozilla. 

Today, the website identity UI was removed from Firefox. “We” knew it was 
coming. But millions of users didn’t. 

Why wasn’t this mentioned in the release notes on the page that’s automatically 
opened following the update? 

Someone might say “they didn’t know it was there anyway”. While this is true 
for the vast majority, it doesn’t answer my question. And it’s not 100% 
accurate for every user of Firefox. 

It’s significant enough to warrant being mentioned in my opinion. And a blog 
post doesn’t count. 

Thanks,
- Paul



Re: [FORGED] Re: Germany's cyber-security agency [BSI] recommends Firefox as most secure browser

2019-10-18 Thread Paul Walsh via dev-security-policy
On Oct 18, 2019, at 6:39 PM, Peter Bowen  wrote:
> 
> 
>> On Fri, Oct 18, 2019 at 6:31 PM Peter Gutmann via dev-security-policy 
>>  wrote:
> 
>> Paul Walsh via dev-security-policy  
>> writes:
>> 
>> >I have no evidence to prove what I’m about to say, but I *suspect* that the
>> >people at BSI specified “EV” over the use of other terms because of the
>> >consumer-visible UI associated with EV (I might be wrong).
>> 
>> Except that, just like your claims about Mozilla, they never did that, they
>> just give a checklist of cert types, DV, OV, and EV.  If there was a Mother-
>> validated cert type, the list would no doubt have included MV as well.
> 
> I think this is even easier. Kirk linked the article which links to the 
> actual requirements at 
> https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Mindeststandards/Mindeststandard_Sichere_Web-Browser_V2_0.pdf
> 
> In section SW.2.1.01, it says "Zertifikate mit domainbasierter Validierung 
> (Domain-Validated-Zertrifikate, DV), mit organisationsbasierter Validierung 
> (Organizational-Validated-Zertifikate, OV) sowie Zertifikate mit erweiterter 
> Prüfung (Extended-Validation-Zertifikate) MÜSSEN unterstützt werden".
> 
> Bing Microsoft Translator says the English translation is "Certificates with 
> domain-based validation (domain-validated certrifikate, DV), with
> organization-based validation (Organizational-Validated Certificates, OV) as 
> well as certificates with Extended Validation Certificates MUST be supported"
> 
> This appears to be the only reference to EV in the requirements.  Given the 
> discussion has been around moving the UI treatment of EV to match OV (versus 
> having a distinct EV-only UI treatment, I don't think there is likely to be 
> any impact on the BSI conformance results.

[PW] *Fact* - none of us know. So let’s find out. 

Presuming to know what a customer / stakeholder thinks is a rookie mistake. The 
BSI is a major “implementation”, and for that reason I hope Mozilla offers an 
opinion and tries to learn more. It’s a great opportunity to find out what 
their perception is. 

This forum is like an unhealthy religious cult where people aren’t open to 
being wrong about anything. Can we try to find common ground - such as our 
desire to help make the web safer? 

- Paul
> 
> Thanks,
> Peter


Re: [FORGED] Re: Germany's cyber-security agency [BSI] recommends Firefox as most secure browser

2019-10-18 Thread Paul Walsh via dev-security-policy
On Oct 18, 2019, at 6:31 PM, Peter Gutmann  wrote:
> 
> Paul Walsh via dev-security-policy  
> writes:
> 
>> I have no evidence to prove what I’m about to say, but I *suspect* that the
>> people at BSI specified “EV” over the use of other terms because of the
>> consumer-visible UI associated with EV (I might be wrong).
> 
> Except that, just like your claims about Mozilla, they never did that, they
> just give a checklist of cert types, DV, OV, and EV.  If there was a Mother-
> validated cert type, the list would no doubt have included MV as well.
> 
> In fact if you're going to go to sheep's-entrails levels of interpretation,
> they place EV last on their list, and it's phrased more as an afterthought
> than the first two ("must support DV, OV, and also EV").
> 
> You're really grasping at straws here...

[PW] Rather than comment on me, perhaps you could indulge us with your 
interpretation. At least I’m open to being wrong. Are you?

Since it does the same thing as DV in regards to encryption, why do you think 
they specified EV?

- Paul

> 
> Peter.


Re: Germany's cyber-security agency [BSI] recommends Firefox as most secure browser

2019-10-18 Thread Paul Walsh via dev-security-policy

> On Oct 18, 2019, at 7:55 AM, scott.helme--- via dev-security-policy 
>  wrote:
> 
> 
>> I hope the Mozilla community will celebrate this honor, but will also 
>> reconsider its proposal to drop support for EV certificates – that would 
>> mean that Firefox no longer meets all BSI requirements for a secure browser.
> 
> Hey Kirk,
> 
> Can you link to where Mozilla (or any other browser vendor) has stated their 
> intention to drop support for EV certificates? Unless you're confusing the 
> recent/upcoming UI changes surrounding EV, I've seen no such intention from 
> the browsers. EV certificates will continue to work just as OV and DV 
> certificates will.

[PW] I think everyone is right on this one.

I have no evidence to prove what I’m about to say, but I *suspect* that the 
people at BSI specified “EV” over the use of other terms because of the 
consumer-visible UI associated with EV (I might be wrong). 

If I’m right, they might get upset with the removal of the UI. Either way, this 
conversation helps to demonstrate to us how an important stakeholder is using 
these terms to make important decisions. It also demonstrates how we are making 
too many assumptions about such important matters. 

I think this is a great opportunity, from a product perspective, to learn more 
about BSI’s expectations and assumptions - to help all of us with the work 
we’re doing on their behalf. 

I think it’s absolutely critical for Mozilla to reach out to BSI to find out 
the right answers. But I think other stakeholders should do the same.

- Paul

> 
> Kind regards, 
> 
> Scott.



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-16 Thread Paul Walsh via dev-security-policy
On Oct 14, 2019, at 12:07 PM, Ronald Crane via dev-security-policy 
 wrote:
> 
> The finding is from public information that is relevant to the current value 
> of EV certificates, which is a central part of this discussion.

[PW] For the record, we didn't purchase an EV cert because the browser UI and 
UX was (and still is in Firefox) so terrible that almost no end-user could tell 
when a website owner had their identity verified or not. It doesn’t take a 
product person or designer or a user experience expert to see this.

Had the browsers implemented good UI/UX you would see an EV cert for our 
corporate website. 

If Mozilla implements meaningful UI/UX in the future, I’ll immediately switch 
to whatever it is that Firefox uses to read identity information. If that’s EV, 
great. If it’s something else, great. As long as the price is right.

- Paul

> 
> -R
> 
> On 10/14/2019 11:10 AM, Paul Walsh via dev-security-policy wrote:
>> I have two questions Ronald:
>> 
>> 1. What should I look for? I just see a DV cert from Let’s Encrypt.
>> 
>> 2. Why did you message the entire community about whatever it is you’ve 
>> found?
>> 
>> Thanks,
>> Paul
>> 
>> Sent from my iPhone
>> 
>>> On Oct 12, 2019, at 11:04 AM, Ronald Crane via dev-security-policy 
>>>  wrote:
>>> 
>>> Just FYI, metacert.com served up this cert recently: 
>>> https://crt.sh/?id=1884181370 .
>>> 
>>> -R
>>> 



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-16 Thread Paul Walsh via dev-security-policy
> On Oct 14, 2019, at 12:07 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> The finding is from public information that is relevant to the current value 
> of EV certificates, which is a central part of this discussion.

[PW] I’m still confused Ronald. And, sorry for taking so long to respond. I 
moved to Vancouver recently and it was Thanksgiving / long weekend.

I’m not sure I understand why you're pointing out that MetaCert uses a Let’s 
Encrypt DV cert. 

Our use of Let’s Encrypt and/or a DV certificate doesn't prove or disprove 
anything that I have said about the need for new browser UI for website 
identity to help make the web safer. If anything, it should help to demonstrate 
that I’m impartial to the CA/Browser battles as an unbiased commentator - I 
literally didn’t want to choose a CA because I knew people would look to see who 
we used. 

Please try to see the good in what I say and do - I have no ulterior motives or 
hidden agendas. If you disagree with something I do or say please say so, and 
let’s debate that. 

And while I have your attention, I would like to point out that I believe 
encryption is vital. HTTPS is vital. SSL certs are vital. I love that DV certs 
can be free. None of these opinions mean that the problems I talk about in my 
thesis aren’t real. It’s OK to like a thing while discussing the problems that 
thing introduces. Lifting out a single point I make can take what I mean out of 
context - just like removing a single word or adding an Oxford comma can change 
the meaning of a sentence. 

It would be strange for me not to support encryption or DV certs. DV and EV use 
the same technology. EV just happens to have ownership identity info for 
browsers to display to end-users. 
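
As an aside, that difference is visible programmatically: a typical DV 
certificate’s subject contains only a commonName, while OV/EV subjects also 
carry organization fields. Here is a minimal, illustrative sketch - the sample 
cert dicts below are hypothetical, shaped like the output of Python’s 
`ssl.SSLSocket.getpeercert()`, and this check does not distinguish OV from EV 
(that would require inspecting the certificatePolicies OIDs):

```python
# Illustrative sketch only: the subject dicts mimic the structure of
# Python's ssl.getpeercert() output; real certs would be fetched over TLS.

def subject_org(cert):
    """Return the organizationName from a cert subject, or None.

    DV certs normally carry no O field in the subject; OV/EV certs do.
    This does NOT distinguish OV from EV - that needs the policy OIDs.
    """
    # subject is a tuple of RDNs, each RDN a tuple of (key, value) pairs
    subject = dict(rdn[0] for rdn in cert.get("subject", ()))
    return subject.get("organizationName")

dv_cert = {"subject": ((("commonName", "example.org"),),)}
ev_cert = {"subject": ((("commonName", "www.paypal.com"),),
                       (("organizationName", "PayPal, Inc."),))}

print(subject_org(dv_cert))  # None
print(subject_org(ev_cert))  # PayPal, Inc.
```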

I rarely use the term “EV" because I believe website identity is bigger and 
wider. And who’s to say the tech and/or the methodology behind it won’t 
change. The term “EV” seems to upset so many people because they can’t see 
beyond their hate for CAs. This is immature. This discussion shouldn’t be EV vs 
DV. 

I’m motivated by the longterm possibilities of decentralizing the decision 
making process for URL classification.

Back to my thesis about the need for new and better browser UI for website 
identity to help make the web safer, was there something that you disagreed 
with Ronald? 
https://casecurity.org/2019/10/10/the-insecure-elephant-in-the-room/ 
<https://casecurity.org/2019/10/10/the-insecure-elephant-in-the-room/>

Thanks,
- Paul

> 
> -R
> 
> On 10/14/2019 11:10 AM, Paul Walsh via dev-security-policy wrote:
>> I have two questions Ronald:
>> 
>> 1. What should I look for? I just see a DV cert from Let’s Encrypt.
>> 
>> 2. Why did you message the entire community about whatever it is you’ve 
>> found?
>> 
>> Thanks,
>> Paul
>> 
>> Sent from my iPhone
>> 
>>> On Oct 12, 2019, at 11:04 AM, Ronald Crane via dev-security-policy 
>>>  wrote:
>>> 
>>> Just FYI, metacert.com served up this cert recently: 
>>> https://crt.sh/?id=1884181370 .
>>> 
>>> -R
>>> 



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-14 Thread Paul Walsh via dev-security-policy
I have two questions Ronald:

1. What should I look for? I just see a DV cert from Let’s Encrypt. 

2. Why did you message the entire community about whatever it is you’ve found?

Thanks,
Paul

Sent from my iPhone

> On Oct 12, 2019, at 11:04 AM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> Just FYI, metacert.com served up this cert recently: 
> https://crt.sh/?id=1884181370 .
> 
> -R
> 


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-11 Thread Paul Walsh via dev-security-policy
Everything I have ever said on this thread can now be found in one article:

https://casecurity.org/2019/10/10/the-insecure-elephant-in-the-room/

This was by invitation of the CA Security Council a few months ago.

I have never worked for a CA and I have never had any reason to say anything in 
favor of CA’s or EV certificates. This is important to say because some people 
will automatically assume that I’m on one side of this debate. 

A few people asked me off-list for my “white paper”. I don’t have one. But this 
has more than 5,000 words and will likely be turned into one - if I can find 
someone to clean it up and make it better. 

Thanks,
Paul



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-11 Thread Paul Walsh via dev-security-policy
I’ve replied for the record even though you say this is your last post on this 
particular thread, or to me. I’m good with that, as I don’t think you care 
about anything anyone says outside the browser vendor world anyway. 

> On Oct 9, 2019, at 5:09 PM, Ryan Sleevi  wrote:
> 
> 
> 
> On Wed, Oct 9, 2019 at 7:17 PM Paul Walsh  > wrote:
> We can all agree that almost no user knows the difference between a site with 
> a DV cert and a site with an EV cert. I personally came to that conclusion 
> years ago. I wanted data, so I asked more than 3,000 people. Almost everyone 
> assumed the padlock represents identity/safety. 
> 
> If you read the research linked, you'll find it specifically addresses this, 
> and points to the fact that positive security indicators lead to inaccurate 
> conclusions like this, whether EV or DV, and thus important to remove them to 
> assure folks do not make attribution errors.

[PW] Our data with 85,000 active users proves otherwise. And you’ve done 
nothing to counter anything I actually said. I can’t find any product screen 
shots that users used when researching ***new*** visual indicators for website 
identity. Research on EV/DV is completely meaningless as we’ve already 
established that browser implementation was poor at best.

> 
> Since the you later glibly joke about taking your e-mails as a post and 
> publishing as a PDF and calling it peer reviewed, I think it's reasonable to 
> conclude you didn't actually read the links, nor look at the publication 
> venues, or that you don't recognize and/or respect them as leading scientific 
> and academic conferences at the forefront of the space and industry you 
> participate in.

Generally, I question everything that Google has to say about anything in 
relation to privacy and security - so there is a lack of respect in regards to 
motives. But that doesn’t mean I don’t read them. And it doesn’t mean I don’t 
have friends who work for Google. My comment about me PDF’ing something was me 
making fun of myself - it had nothing to do with my opinions about other 
people’s work. I’m not an academic. I never even went to university, so my 
writing skills aren’t as good as anyone who actually writes papers like the 
ones you referenced. What I write is based on real world use cases - not 
theory-based research. They are very different and both are important. 

>  
> I can cite research for search annotation through a browser add-on (2007). It 
> was formally endorsed by the W3C Semantic Web Education and Outreach Program 
> as one of the most compelling implementations of the Semantic Web. But it’s 
> out of date and doesn’t answer the right questions. But here it is 
> https://www.w3.org/2001/sw/sweo/public/UseCases/Segala/ 
> 
> 
> Thanks. This helps confirm that it was not peer reviewed research, but a 
> marketing case study, completed in 2007.

[PW] This one in particular was reviewed in great detail by all of the W3C 
Semantic Web Education & Outreach participants. They reviewed and voted that it 
was “one of the most compelling implementations of the Semantic Web”. Segala 
and its work was then used as a use case for the W3C POWDER initiative, which 
formally replaced PICS in 2009. But it’s all very old and out of date - that 
was the point I was actually trying to make. 

https://www.w3.org/blog/SWEO/ 
>  
>> Do you have any peer-reviewed data to support your beliefs? It seemed like 
>> the only data shared was from vendors marketing solutions in this space, 
>> although perhaps it was overlooked.
> 
> [PW] Perhaps you did overlook it - hard to say as you didn’t reply to the 
> thread that contained the data. 
> 
> Apologies, the volume of the mail you write, versus the new data provided, 
> makes it difficult. Since I must have missed it, it might be useful to 
> provide specific links to your message, or better yet, specific links to the 
> data that has undergone the similar level of scientific rigor and 
> well-respected peer-review.

[PW]  You’re right about my messages - but it’s because people are asking me 
the same questions that have already been answered - but rather than say that, 
I prefer to reply. If you want “peer-review” evidence please stop referencing 
Google and reference companies that aren’t biased and who actually have other 
stakeholders and users best interests in mind. Unless there were non-Google 
employees involved, in which case I would apologize. 

>  
> [PW] Why haven’t you provided any insight to suggest why I’m wrong, instead 
> of asserting that I haven’t provided evidence to back up my assertions? 
> 
> https://twitter.com/krelnik/status/472046082135162881 
> 
> 
> If you review the links provided, you will see that the claims made in your 
> prior e-mail are directly, and easily, contradicted, as being both 

Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-09 Thread Paul Walsh via dev-security-policy


> On Oct 9, 2019, at 4:21 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/9/2019 3:17 PM, Paul Walsh wrote:
>>> On Oct 9, 2019, at 3:06 PM, Ronald Crane via dev-security-policy 
>>>  wrote:
>>> 
>>> On 10/9/2019 2:24 PM, Paul Walsh via dev-security-policy wrote:
>>> it indefinitely.
>>>> [PW] Here’s the kink Ronald. I agree with you. Mozilla’s decision to 
>>>> implement DoH is going to make everything much worse for the security 
>>>> world - it’s insanely bad 
>>> This is off-topic.
>> [PW] I agree. But no more off-topic than URL shortening services, security 
>> systems for phishing detection and other aspects of this conversation. So 
>> let’s go back to talking about Firefox UI/UX for website identity.
> 
> Huh? I understand the conversation to be about phishing, and URL-shortening 
> services are relevant to phishing. DoH is not relevant to phishing unless 
> it's spectacularly-incorrectly designed.

[PW] My point is this - a massive number of security solutions will become 
useless thanks to DoH. I don’t know if they can all be updated to work, but I 
know it will require a lot of work. Browser changes have a massive impact on 
other stakeholders who are also responsible for keeping society safe from harm 
while using the internet. 

> 
>>>> Incumbent security systems do help to provide a “dent” in the problem. But 
>>>> the dent isn’t good enough as per my previous commentary.
>>>> 
>>>> As far as I can tell, absolutely nothing different has been tested in the 
>>>> past 10 years (sure, AI and other fancy words have been added, but not 
>>>> really helping much). Attacks and breaches are increasing, not decreasing.
>>>> 
>>>> If Firefox had a new separate icon for website identity it would be the 
>>>> single biggest improvement to internet safety we’ve seen in the past 10 
>>>> years - way bigger than encryption - in my opinion - I don’t have data to 
>>>> substantiate that particular assertion.
>>> Since a foundation-supported whitelist would work without much user 
>>> training or intervention, I'd suspect it'd work better than any UI. But 
>>> that, also, is supposition.
>> [PW] If you don’t have any UI, how does the whitelist work? According to 
>> your theory the existing system works.
> 
> I was a little imprecise. I meant that there would be no positive indicator. 
> The UI would be similar to the scare-screen that FF uses to warn about cert 
> problems. Users wouldn't see it unless they tried to visit a site not listed 
> in the whitelist. Thus it would require less user training and intervention 
> than a positive indicator. It would not be invulnerable, of course. Phishers 
> would attempt to train users to ignore it. But they would do that for any 
> positive indicator, as well. The scare-screen has the advantage of being 
> in-your-face scary, so I intuit (no data to support this idea) that it'd be 
> more effective than any positive indicator.

[PW] I love this concept Ronald. We haven’t yet launched a mobile solution 
designed with this in mind because we haven’t classified enough domains, 
sub-domains and social media accounts. We’ve verified more than the total 
number of EV certs on the market, but it’s still not even close enough to 
execute your UX recommendation. Most people would get too upset with that UX. I 
love it, but it’s not quite there yet. It would likely work better for 
enterprises as it’s a more flexible version of URI-based “Zero Trust”. 

It is possible to train people to look for a new indicator - in the same way 
you can train them to use a new feature or icon for other things. 


> 
>> Whitelist = EV certificates.
> 
> No "=". EV certs would be one input of several. Many authentic domains 
> lacking EV certs are well-known and would be included in the whitelist. Any 
> domain already on a well-run blacklist (e.g., GSB) would *not* be included in 
> the whitelist. Any domain that appears to be phishing would be submitted for 
> verification to the appropriate contact(s) of the domain(s) that it appeared 
> to be phishing (e.g., "microsftpaypal.com" would get submitted to both MS and 
> Paypal). Pending verification, it would not be added to the whitelist. The 
> whole thing would be run by a well-trusted foundation (e.g., Mozilla) so 
> there's no need for complex reputation-based distributed review systems, 
> cryptotokens, etc.
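A minimal sketch of the decision logic described above, in Python. All names and lists here are hypothetical illustrations of Ronald's proposal, not an actual browser implementation:

```python
# Hypothetical sketch of the foundation-run whitelist: EV certs are one
# input among several, known-bad domains (e.g. from GSB) are excluded,
# and suspected look-alikes stay out until the brand verifies them.
WHITELIST = {"paypal.com", "microsoft.com", "mozilla.org"}
BLACKLIST = {"examplephish.com"}   # e.g. sourced from a blacklist like GSB
PENDING = set()                    # queued for brand verification

def browser_action(domain: str) -> str:
    """What the browser does on navigation: no positive indicator,
    just a scare-screen for anything not on the whitelist."""
    if domain in WHITELIST:
        return "load"
    if domain not in BLACKLIST:
        PENDING.add(domain)        # submit for verification
    return "scare-screen"
```

Pending domains would be forwarded to the contacts of the brands they appear to impersonate (the "microsftpaypal.com" example above) and promoted to the whitelist only after verification.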

Yes, I like all of this. My intention was to tease out the fact that we are in 
agreement on more than one might have first concluded. You see, I think most 
people assume I’m a “CA lover” when I talk about website ident

Re: [FORGED] Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-09 Thread Paul Walsh via dev-security-policy

> On Oct 9, 2019, at 4:19 PM, Peter Gutmann  wrote:
> 
> Paul Walsh via dev-security-policy  
> writes:
> 
>> The data suggests that automatically issued DV certs for free is a favorite
>> for criminals.
> 
> True, but that one's just an instance of Sutton's Law, they go for those
> because they're the least effort.  I was at a talk yesterday by a pen-tester
> who talked about phishing CEOs and the like and a throwaway comment he used at
> one point was "we got a cert [for their phishing site] from Let's Encrypt". It
> was completely casual, just a built-in part of the process, because the years
of training people to look for the padlock/green bar/dancing unicorns means
> that that's what the bad guys do to make the phish look more convincing. If
> Let's Encrypt didn't exist, the phrase would have been "we bought a cheap cert
> from GoDaddy".  If browsers only allowed EV certs, it would have been "we
> bought an EV cert through a shell corporation" or "... from an underground
> market”.

I don’t disagree with any of this. But you’re responding to one point I made; 
when it’s taken in the context of other points it has more meaning. I’ll even add 
to it, Peter, to further your point and my support for it… If every mainstream 
browser implemented a great icon for website identity and almost all consumers 
relied on it, the risk of EV certs being obtained by threat actors would 
increase. This is why I have also said that I think CAs would need to “tighten 
up their belts” when it comes to their processes for verification and 
revocation. 

My point still stands regarding the need for new UI for website identity, 
because everyone relies on the lock - even on dangerous websites. 

Right now, there is *NO* bar for criminals - zero. We’ve done the opposite of 
what we could have done with email spam decades ago: charging even a small 
amount to send an email would have raised the cost of spam. I’m not saying it 
would have worked, but you get my point. 

> 
> Point is, once you've got some universally-recognised signalling mechanism
> that a site is OK, it'll be used by the bad guys to make their attacks totally
> convincing, whether it's DV certs, EV certs, free certs, expensive certs, or
> whatever.

I agree. But also, Peter, there are new blockchain-based solutions for “KYC” 
(know your customer) that can be used in conjunction with existing processes, 
coupled with a few additional techniques. I’d do this if I were running a CA 
that charges for website identity, but I don’t. 

> 
>> I can’t add any more evidence to prove that something needs to be done about
>> Let’s Encrypt as an entire initiative is an overall failure in my opinion.
> 
> It's actually been phenomenally successful.  Browsers won't allow you to
> encrypt a connection without a certificate, and Let's Encrypt enables that. It
> hands out magic tokens to turn on encryption in browsers, nothing more,
> nothing less, and it's been very successful at that.

Perhaps you can comment on the cost of this greatness? I’ve cited a lot of 
stats that suggest everyone has a friend, colleague or family member who has 
been a victim of an attack of some kind, either directly or indirectly thanks to 
Let’s Encrypt SSL certificates - notwithstanding everything we agree on above. 
I have personally spoken to people who have lost their entire life savings in 
a phishing attack because they relied on the padlock - which almost certainly 
was issued by Let’s Encrypt - who could have had checks in place to reduce the 
risk, if not completely mitigate it. 

Right now they do absolutely nothing - in the same way 4chan and 8chan were 
amazing for freedom of speech, but… that’s not ok. We all have an obligation 
to try to reduce the risk of our technology being used for bad. Now we’re down 
the rabbit hole of another topic - but it is important to discuss, as it gets to 
the heart of the lack of research by anyone advocating for HTTPS EVERYWHERE, 
and the negative impact it’s having thanks to the lack of UI for website 
identity. 

- Paul


> 
> Peter.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-10-09 Thread Paul Walsh via dev-security-policy
I’m sorry for the follow-up message - I know we all get too many notifications 
already. But I forgot to add that I was the founder and CEO of Segala - the 
company referenced on the W3C website that I referred to below.

Sorry about that.

Paul



> On Oct 9, 2019, at 4:17 PM, Paul Walsh  wrote:
> 
> 
> 
>> On Oct 9, 2019, at 3:23 PM, Ryan Sleevi <r...@sleevi.com> wrote:
>> 
>> 
>> 
>> On Wed, Oct 9, 2019 at 6:06 PM Paul Walsh via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>> I believe an alternative icon to the encryption lock would make a massive 
>> difference to combating the security threats that involve dangerous links 
>> and websites. I provided data to back up my beliefs. 
>> 
>> Here's peer-reviewed data, in top-tier venues, that shows the beliefs are 
>> unfounded:
>> https://ai.google/research/pubs/pub48199
>> https://ai.google/research/pubs/pub45366
> 
> I don’t disagree with this research. But it’s the wrong research, Ryan - it 
> asks the wrong questions. You haven’t explained why any of my research draws 
> the wrong conclusions, but I’ll explain why Google's is fundamentally wrong. 
> 
> Perhaps you can do the same for me when you get time. 
> 
> We can all agree that almost no user knows the difference between a site with 
> a DV cert and a site with an EV cert. I personally came to that conclusion 
> years ago. I wanted data, so I asked more than 3,000 people. Almost everyone 
> assumed the padlock represents identity/safety. 
> 
> I can cite research for search annotation through a browser add-on (2007). It 
> was formally endorsed by the W3C Semantic Web Education and Outreach Program 
> as one of the most compelling implementations of the Semantic Web. But it’s 
> out of date and doesn’t answer the right questions. Here it is anyway: 
> https://www.w3.org/2001/sw/sweo/public/UseCases/Segala/
>> 
>> Do you have any peer-reviewed data to support your beliefs? It seemed like 
>> the only data shared was from vendors marketing solutions in this space, 
>> although perhaps it was overlooked.
> 
> [PW] Perhaps you did overlook it - hard to say as you didn’t reply to the 
> thread that contained the data. 
> 
> The research to which you refer is from a vendor’s marketing solution. 
> Google is the vendor and Chrome is the marketing solution. This is no 
> different from MetaCert asking 85k power users. 
> 
> We had absolutely no reason to lie to ourselves or to skew opinions for this 
> conversation. 
> 
> We sell security services while verifying domains for free. We needed to do 
> the research to find out if we had a solution to a problem. In theory, we are 
> putting CAs out of business. And if all browsers implemented better UI for 
> website identity, it would put our flagship solution out of business. If I 
> convinced you that you are wrong and I’m right, I’d have more to lose than I 
> would to gain. Right now I’m putting industry and people’s safety ahead of my 
> shareholders. I’m completely impartial to browser vendor vs CA debates on all 
> fronts.
> 
>>  
>> The reverse-proxy phishing technique bypasses Google’s own Safe Browsing API 
>> inside their own WebView while their own users sign into Google pages while 
>> using Google’s Authenticator for 2FA. So their answer? In June 2019 they 
>> banned users from signing into Google’s pages while using mobile apps with a 
>> WebView. This tells you what you need to know about the Safe Browsing API - 
>> finally I have the evidence to prove that it’s an “ok” solution at best. 
>> Most security companies still think it’s great - because they’re not in 
>> possession of all the facts. 
>> 
>> While I suspect I'll regret replying to this message, since so much of it is 
>> off-topic for this discussion Forum, I do want to point out the attribution 
>> error being made with correlation versus causation. You're making a specific 
>> conclusion about why WebView-based sign-ins were banned, without any 
>> supporting data, along with factually-suspect statements that are 
>> unsupported.
> 
> [PW] Why haven’t you provided any insight to suggest why I’m wrong, instead 
> of asserting that I haven’t provided evidence to back up my assertions? 
> 
> But because you asked so nicely:
> 
> The following was published by Jonathan Skelker, Product Manager of Account 
> Security at Google, in April 2019:
> 
> “[snip]… one form of phishing, known as “man 

Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-09 Thread Paul Walsh via dev-security-policy

> On Oct 9, 2019, at 10:42 AM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:50 PM, Paul Walsh via dev-security-policy wrote:
> 
> [snip]
>>>> sɑlesforce[.com] is available for purchase right now.
>>> I was going to suggest banning non-Latin-glyph domains, since they are yet 
>>> another useful phishing weapon. FF converts all such domains into Punycode 
>>> when typed or pasted into the address bar, though the conversion is 
>>> displayed below the address bar, not in it. So your example becomes 
>>> "http://xn--slesforce-51d.com/".
>> Just providing an example of a URL that uses .com. I can provide more 
>> without using special characters to demonstrate the same point.
> 
> Well, I'm sure that many domains containing "salesforce" presently are 
> unregistered, e.g., "salesforcecorp.com". This fact supports the idea that 
> internet entities should make a concerted effort to clean up their namespaces 
> as I noted previously. Of course, that should be one among many other 
> approaches to reducing phishing….

[PW] I agree. 
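The Punycode conversion Ronald describes above is deterministic (RFC 3492), so it is easy to reproduce - here with Python's built-in `punycode` codec, using the sɑlesforce example from the quoted text:

```python
# Reproduce the homograph defence quoted above: a label containing a
# non-Latin look-alike glyph is shown in its Punycode (ACE) form.
label = "sɑlesforce"  # 'ɑ' is U+0251 LATIN SMALL LETTER ALPHA, not 'a'
ace = "xn--" + label.encode("punycode").decode("ascii")
print(ace + ".com")  # xn--slesforce-51d.com
```

Because the two labels render almost identically but encode differently, displaying the ACE form is what reveals the look-alike to the user.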

> 
> Elsewhere in this thread I proposed a foundation-run *whitelist* of authentic 
> domains that browsers could use to warn users about potential phishing sites 
> (e.g., "paypal.com" is in the whitelist, but the ~20,000 other nonauthentic 
> domains containing "paypal" are not). This approach would reduce the need for 
> users to examine domains to determine authenticity. What's your view on it?

[PW] I agree. And such lists exist already. At MetaCert we aggregate all open 
source lists that are available. We have our own community with a few thousand 
members who report and validate suspicious links every day, while also 
submitting and validating links that should be verified as safe. These all go 
into one database and are served via an API that also covers 3,500+ shortening 
services. So, call the API and get a response in 270ms. But this is not good 
enough...

We eradicated phishing for the crypto world on Slack with a security 
integration in Q4 2017 - it was rampant beyond belief. As soon as a phishing 
attack was discovered, reviewed, validated and classified, messages that 
contained those links in other communities, would be auto deleted from other 
Slack. There were times when we classified scams in less than 2 minutes. We 
even have software with machine-learning capabilities listening to the Twitter 
firehose - it detects signals attributed to scams, follows the thread, finds 
the URL or digital wallet address, detects and classifies. 

BUT, we came to learn that no matter how fast we get, no matter how much tech 
and people we throw at the problem, there will always be victims. It is 
technically impossible to detect every new dangerous URL or website. 

My team and I have even written a white paper, technical paper and mathematical 
equations for a crypto token to incentivize the decentralization of the 
decision making process. This took about 18 months.

All of this, and our R&D into visual indicators and URL classification dating 
back to 2004, helped us to conclude that chasing after threats just isn’t 
effective enough. I would argue that we have built the most advanced URL-based 
threat intelligence system by an order of magnitude, as it can also classify 
folders on sites like GitHub in a way others can't - but I’m losing faith and 
conviction in the entire threat model. 

It’s so much easier to tell someone what’s safe, than it is to detect what’s 
dangerous. 

So, I agree with you Ronald - your suggestion is a great one. But I’m afraid it 
doesn’t solve the problem in the same way that website identity does - as I 
described previously. This is not a popular belief - I never seem to pick 
things that are easy. 

- Paul


> 
> -R
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-09 Thread Paul Walsh via dev-security-policy
On Oct 9, 2019, at 7:30 AM, Leo Grove via dev-security-policy 
 wrote:
> 
> On Tuesday, October 8, 2019 at 10:36:19 PM UTC-5, Matt Palmer wrote:
>> On Tue, Oct 08, 2019 at 07:16:59PM -0700, Paul Walsh via dev-security-policy 
>> wrote:
>>> Why isn’t anyone’s head blowing up over the Let’s Encrypt stats?
>> 
>> Because those stats don't show anything worth blowing up one's head over.  I
>> don't see anything in them that indicates that those 14,000 certificates --
>> or even one certificate, for that matter -- was issued without validating
>> control over the domain name(s) indicated in the certificates.

[PW] Here are some facts from the cybersecurity world - which is completely 
impartial to browser vendors and CAs, as well as “HTTPS Everywhere” advocates. 
Security companies only care about the safety and wellbeing of people who use 
the internet - they don’t care about personal and emotional feelings in this 
political debate. I have already provided links to every source in previous 
emails. 

93% of breaches start with phishing
Phishing increased by 250% in 2018
Phishing URLs outnumber malicious attachments five to one
91% of all phishing sites have a DV cert
95% of phishing sites with a DV cert come from Let’s Encrypt
Phishing is growing at the same rate as that of Let’s Encrypt
Phishing can only get worse

The data suggests that free, automatically issued DV certs are a favorite for 
criminals. This is 100% because consumers who are aware of the padlock 
automatically rely on it for trust and assume they can trust the owner.

***This means most breaches happen because people look at the padlock and fall 
for Let’s Encrypt encryption.*** This should at least, stop everyone in their 
tracks to ask, “How can we encrypt the web while not making it less safe?” If 
not, your motives are all wrong in my opinion. 

If all of this doesn’t paint a bleak picture, I can’t add any more evidence to 
prove that something needs to be done about Let’s Encrypt - as an entire 
initiative it is an overall failure in my opinion. While it’s helping to scale 
encryption for a more “private” web, it’s an existential threat to internet 
safety. 

There is a solution to this problem:

Make website identity better so consumers stop relying on the padlock for 
trust. I’ve provided data to demonstrate how this can work. I haven’t received 
any opposing views on my data/findings. And there’s no research to prove 
otherwise. 

If consumers stopped looking at the padlock all of the above would go away - 
until threat actors find another vector. It would introduce other issues to 
address - for example, CAs would need to really tighten up all of their 
processes because threat actors might buy an EV cert if the cost is worth it. 

>> 
> 
> Validation compliance is not the topic of this thread. Stripe Inc was able to 
> get their EV certificate in a compliant way after all. It sounds like since 
> 14k DV certs were issued to phishing sites in a compliant way, everything is 
> a-ok?
> 
> What are your thoughts if those 14k certs were EV? 
> 
>> EV and DV serve different purposes, and while DV is more-or-less solving the
>> problem it sets out to solve, the credible evidence presented shows that EV
>> does not solve any problem that browsers are interested in.
>> 
>>> If people think “EV is broken” they must think DV is stuck in hell with
>>> broken legs.
>> 
>> Alternately, people realise that EV and DV serve different purposes through
>> different methods, and thus cannot be compared in the trivial and flippant
>> way you suggest.
>> 
>> - Matt
> 
> You've mentioned "EV and DV serve different purposes" twice and I think that 
> is misleading. EV requires DV validation as well, and they both serve to 
> authenticate and encrypt. However, EV goes beyond authenticating only the 
> domain name which is where DV stops. EV attempts to bind the domain name to 
> an actual owner. 
> 
> People deploying EV expect to get DV and something more. When the browsers 
> stop displaying the EV UI, it will be indistinguishable from DV on cursory 
> glance. To me, this shows EV and DV serve similar purposes, but EV attempts 
> to go further in the context of authentication.

[PW] Bravo - great reminder. If people dislike the cost, process or timing, 
they should debate those things separately. I personally believe the entire EV 
process can be massively improved with very specific tools and methodologies - 
but that’s for another conversation. This is about Mozilla removing UI instead 
of making it better.

- Paul

> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 1:16 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>> New tools such as Modlishka now automate phishing attacks, making it 
>> virtually impossible for any browser or security solution to detect - 
>> bypassing 2FA. Google has admitted that it’s unable to detect these phishing 
>> scams: they use a phishing domain but, instead of a fake website, they use 
>> the legitimate website to steal credentials, including 2FA. This is why 
>> Google banned its users from signing into its own websites via mobile apps 
>> with a WebView. If Google can’t prevent these attacks, Mozilla can’t.
> 
> I understand that Modlishka emplaces the phishing site as a MITM. This is yet 
> another reason for browser publishers to help train their users to use only 
> authentic domain names, and also to up their game on detecting and banning 
> phishing domains. I don't think it says much about the value, or lack 
> thereof, of EV certs. As has been cited repeatedly in this thread, most 
> phishing sites don't even bother to use SSL, indicating that most users who 
> can be phished aren't verifying the correct domain.

[PW] Ronald, I don’t believe better detection and prevention is the answer for 
anti-phishing - but not trying isn’t an option, obviously. With billions of 
dollars being invested in this area, and with hundreds of millions changing 
hands through M&A every year, the problem is getting worse. Every week we read 
about yet another security company with anti-phishing [insert fancy words 
here]. It ain’t work’n. 

I believe I demonstrated in a previous message, with data and techniques, why 
it’s impossible for any company to detect every phishing URL or website. 

And I’m afraid you’re incorrect about SSL certs. According to Webroot, over 93% 
of all new phishing sites use an SSL certificate. And according to MetaCert 
it’s over 95%.

And of those with a DV cert, over 95% come from Let’s Encrypt - because the 
certs are automatically issued for free and Let’s Encrypt has a near-zero 
policy for detection, prevention or cert revocation. This is why over 14,000 
SSL certs were issued by Let’s Encrypt for domains with PayPal in them - so if 
you believe in better detection and prevention, why don’t you/we request this 
of Let’s Encrypt? 

Why isn’t anyone’s head blowing up over the Let’s Encrypt stats? If people 
think “EV is broken” they must think DV is stuck in hell with broken legs.

It’s impossible to properly verify the domain by looking at it - you need to 
carry out other checks. It’s simply not solving the problem. 

I provided data and insight to how website identity UI can work - I’d really 
love to hear counterarguments around that, or agreement that it’s useful. 

- Paul

> 
> -R
> 
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy
> On Oct 2, 2019, at 3:41 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:00 PM, Paul Walsh via dev-security-policy wrote:
>> On Oct 2, 2019, at 2:52 PM, Ronald Crane via dev-security-policy 
>>  wrote:
> [snip]
>>> Some other changes that might help reduce phishing are:
>>> 1. Site owners should avoid using multiple domains, because using them 
>>> habituates users to the idea that there are several valid domains for a 
>>> given entity. Once users have that idea, phishers are most of the way to 
>>> success. Some of the biggest names in, e.g., brokerage services are 
>>> offenders on this front.
>> [PW] Companies like Google own so many domains and sub-domains that it’s 
>> difficult to stay ahead of them. I think this is an unrealistic expectation. 
>> So if other browser vendors have the same opinion, they should look inward.
> It is not unrealistic to expect, e.g., Blahblah Investments, SIPC, to use 
> only "www.blahblahinvestments.com" for everything related to its retail 
> investment services. It *is* unreasonable to habituate users to bad practices.

[PW] I hear you Ronald. And I agree. My point was that it’s unrealistic for us 
to expect this pattern of domain use to change. I can’t see how any stakeholder 
can force or encourage organizations to use a single domain name or even a 
small number of them for a given purpose. So there’s little point in directing 
energy to something we can’t change.


>>> 2. Site owners should not use URL-shortening services, for the same reason 
>>> as (1).
>> Site owners using shortened URLs isn’t the problem in my opinion. Even if 
>> shortened URLs went away, phishing wouldn’t stop. Unless you have research 
>> to provides more insight?
> Where did I say that phishing would "stop" if URL shortening services 
> disappeared? I said avoiding them would be helpful, since it would reinforce 
> the idea that there is one correct domain per entity, or at least per entity 
> service. Probably all the entity services should be subdomains of the one 
> correct domain, but alas it will take a sustained security campaign and a 
> decade to make a dent in that problem.

[PW] I apologize if I gave the impression that you were saying something that 
you were not. That wasn’t my intention. We can try to encourage companies to 
stop using shortening services, but we’re not likely to have much of an impact. 
People who don’t belong to a brand or organization will continue to use 
shortening services too. 

I have some ideas for shortening services. They can implement better trust. 
Example: a URL that belongs to a site with website identity verified, could 
look like https://verified.tinyurl.com/345kss or they could direct to a TinyURL 
webpage where it informs the user of the verified destination.
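As an illustration of that idea only (the `verified.tinyurl.com` host and the verification source are hypothetical examples from the paragraph above - this is not how TinyURL actually works):

```python
import hashlib
from urllib.parse import urlparse

# Domains whose website identity has been verified (hypothetical source).
VERIFIED_SITES = {"paypal.com"}

def shorten(long_url: str) -> str:
    """Issue a short link on a 'verified' subdomain when the destination's
    identity has been verified, signalling trust before the redirect."""
    host = urlparse(long_url).hostname
    slug = hashlib.sha256(long_url.encode()).hexdigest()[:6]
    prefix = "verified.tinyurl.com" if host in VERIFIED_SITES else "tinyurl.com"
    return f"https://{prefix}/{slug}"
```

The same signal could instead drive an interstitial page that shows the verified destination before redirecting, as suggested above.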


>>> 3. Site owners should not use QR codes, since fake ones are perfect for 
>>> phishing.
>> Same as above. You don’t need to mask URLs to have a successful phishing 
>> campaign.
> No, you don't "need" to do it. It is, however, a very useful weapon in 
> phishers' quivers.

[PW] I totally agree. I’d like to add that, of the hundreds of millions of apps 
with a WebView, many don’t display the URL at all. We also have Google’s AMP project 
which does little to help. And then we also have social media cards and 
previews where it’s possible to trick the system by displaying the og metadata 
from the real website while linking to the malicious destination. Rabbit hole…

>> sɑlesforce[.com] is available for purchase right now.
> 
> I was going to suggest banning non-Latin-glyph domains, since they are yet 
> another useful phishing weapon. FF converts all such domains into Punycode 
> when typed or pasted into the address bar, though the conversion is displayed 
> below the address bar, not in it. So your example becomes 
> "http://xn--slesforce-51d.com/".
> 
>> 
>>> 4. Browser publishers should petition ICANN to revoke most of the gTLDs it 
>>> has approved, since they provide fertile ground for phishing.
>> Petitioning them won’t work. gTLDs are here to stay, even if we dislike 
>> them. Also, most phishing sites use .com and other well known TLDs. I’m not 
>> saying gTLDs aren’t used, they are. But they’re not needed.
> Of course they're not "needed" for phishing. They are, however, useful for 
> phishing.
>> So, bringing it back to Mozilla. I’d still love to see recent research/data 
>> to back up Mozilla’s decision to remove identity UI in Firefox. By promoting 
>> the padlock without education about phishing, browser vendors are actually 
>> making the web more dangerous.
> 
> I also would like to see more research.

- Paul


Re: [FORGED] Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 3:52 PM, Peter Gutmann  wrote:
> 
> Paul Walsh ​ writes:
> 
>> I would like to see one research paper published by one browser vendor to
>> show that website identity visual indicators can not work.
> 
> Uhhh... are you serious with that request?  You're asking for a study from a
> browser vendor, a group who in any case don't publish research papers but
> write browsers, indicating that their own UI doesn't work?

[PW] I see where you are coming from Peter. I wouldn’t expect any browser 
vendor to provide studies or evidence to explain why they’re implementing 
features. And separately, I wouldn’t expect Google to provide anything to 
anyone for any reason, because they pretty much do what they do for profit. 
Chrome dev is directed by advertising dollars, not by privacy or user safety. 

However, I'd love to think that the Mozilla team still care about the developer 
community and end users more than they care about profit [1] or following other 
browser vendors. Firefox isn’t the “leader” it was, but I still love the brand 
and cause.  

I’m sure you don’t need to be reminded that Mozilla is a foundation, but I 
personally wanted to remind myself of their core values. So with this in mind, 
I’d like to think that the team would stop and rethink decisions that have a 
massive impact on stakeholders and end-users. And when asked for some 
supporting evidence, they wouldn’t fall silent but engage in a meaningful 
debate.  

It has been a long time since my team or I were involved in any way, so this 
might have changed. 

[1] https://www.mozilla.org/en-US/about/ 
> 
>> I’d love you to show me the type of research I’ve asked for. I’m open to
>> learning more. I’m not new to this game. I worked on integrated browsers and
>> search engines in the 90’s at AOL.
> 
> If it's OK to cite peer-reviewed papers from universities published at
> conferences and in journals, I can dig up a few of those.

[PW] If you ever do find the time to dig them out, please do. No pressure.

- Paul

> 
> Peter.
> 
> 



Re: [FORGED] Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 4:05 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:27 PM, Peter Gutmann wrote:
>> Ronald Crane via dev-security-policy  
>> writes:
>> 
>>> "Virtually impossible"? "Anyone"? Really? Those are big claims that need 
>>> real
>>> data.
>> How many references to research papers would you like?  Would a dozen do, or
>> do you want two dozen?
> One well-done paper would do.
>> I'm pretty sure I haven't been phished yet.
>> How would you know?
> 
> Since most phishing appears to be financial, I would expect unauthorized 
> withdrawals from financial accounts, unauthorized credit card charges, 
> unordered packages showing up, dunning notices from the IRS because I filed 
> my tax returns with a phisher, etc. I haven't observed these indicia of 
> getting phished.

[PW] I agree that financial is a good incentive. But it’s by no means the only 
incentive. 

According to Verizon, 93% of data breaches start with phishing - to steal 
credentials. 

Here’s what happens:

Marriott Starwood Hotels, Aadhar, Exactis, MyFitnessPal and Quora were breached 
last year.
Over 2 billion records were compromised.

Most people changed their password on the site that was compromised.
Most people use the same password for many services.
Most people didn’t change their credentials on sites that weren’t compromised.
Threat actor searches one or more databases for a company or person and buys 
their credentials. Or just buys them in bulk.
Threat actor tries the person’s credentials on internal systems or services 
with sensitive information.
Another company is compromised.
Loop.

While the media talks about hacking and breaches and other cool “cyber” terms, 
what they’re not saying is that social engineering is at the core of many of 
these attacks. Social engineering is cheaper, quicker and easier than trying to 
find computer or network based vulnerabilities. 

The latter does happen and there are many amazing security professionals 
building systems to detect and prevent those types of attacks. I’m not one of 
them because I’m not smart enough to address those weaknesses. 

> 
>> And how does this help the other 7.53 billion people who
>> will be targets for phishers?
> Alas it doesn't. We do need better phishing prevention. Do you have a 
> suggestion?

[PW] While phishing detection and prevention is improving all the time, it will 
never be good enough. It’s much easier for a user to know that PayPal.com is 
who they think it is based on a visual indicator, than it is to detect the 
14,000 PayPal phishing sites with a Let’s Encrypt DV certificate. 

Yes, I just went there :)
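Part of why detection lags at that scale is that lookalike domains must be matched against brands fuzzily. A minimal editorial sketch using Python's stdlib difflib (my own illustration, not any production engine) shows both the idea and its limits:

```python
import difflib

# Tiny illustrative brand list; a real system would hold millions of
# protected terms plus homoglyph tables, not a single fuzzy string ratio.
BRANDS = ["paypal.com", "google.com", "apple.com"]


def closest_brand(domain: str) -> tuple[str, float]:
    """Return the most similar brand domain and its 0..1 similarity ratio."""
    scored = [
        (difflib.SequenceMatcher(None, domain, b).ratio(), b) for b in BRANDS
    ]
    score, brand = max(scored)
    return brand, score
```

`closest_brand("paypa1.com")` (digit one for the letter l) returns `("paypal.com", 0.9)` — an easy catch. But a domain like `paypal.com.secure-login.example` scores low against `paypal.com` despite being a plain phish, which is one reason simple similarity checks leave so many sites undetected.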

- Paul


>>> In any case, have we ever really tried to teach users to use the correct
>>> domain?
>> Yes, we've tried that.  And that.  And that too.  And the other thing.  Yes,
>> that too.
>> 
>> None of them work.
> 
> Please cite the best study you know about on this topic (BTW, I am *not* 
> snidely implying that there isn't one).
> 
> -R
> 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Updated website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Paul Walsh via dev-security-policy
I finally got around to digesting the email below. Summary/reminder: CA-related 
data on website identity from the perspective of website owners. 

As Homer Simpson said, “70% of all reports are made up”. So, everything put 
forward by me in previous messages, or anyone else, must be taken with a pinch 
of salt. That said, data does give meaning to personal opinions. Without data, 
we’re left with just opinions.

If we set the data aside for a second, we all know (fingers crossed) that 
opening the wrong link and signing into the wrong website is something that 
people either worry about, or should be worried about. 

I pitched a company last week. The Director of Threat Intelligence for a 
multi-billion dollar security company in Silicon Valley thought he’d prove that 
he couldn't be caught out. I wasn’t testing the room, but he jumped in and said 
"#10 is the real domain". He was wrong (unfortunately because I felt bad) - it 
was a fake. I had to explain how it wasn’t a reflection on his expertise but 
rather, an emotional state of mind at a given point in time under specific 
circumstances. What the eyes can’t see, the brain fills in [1].

This subject is so important I would love Mozilla to consider implementing a 
beta program. I’d proudly contribute. 

Here’s something we did at MetaCert, that Mozilla could do - auto classify 
regulated TLDs and gTLDs. For example, you could light up the visual indicator 
for URLs on .GOV domains - without any need for third-party interaction. This 
would make it virtually impossible for anyone to fall for a phishing scam when 
filing taxes - for example. Perhaps it would encourage the DNC (and GOP) to 
only use .GOV domains and avoid being hacked by Russians in the future. These 
are just a few use cases where there’s a potential for massive real world 
benefit.
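As a rough sketch of the idea (my illustration — not MetaCert's or Mozilla's actual implementation), a browser could do this locally with nothing more than a check on the registry-controlled final label:

```python
# Hypothetical local classifier for "regulated" TLDs: registries such as
# .gov (and .mil, .edu) restrict who may register, so the TLD alone
# carries identity information -- no third-party interaction needed.
REGULATED_TLDS = {"gov", "mil", "edu"}  # illustrative, not exhaustive


def identity_signal(hostname: str) -> str:
    """Return a UI hint based on the final, registry-controlled label."""
    labels = hostname.lower().rstrip(".").split(".")
    # Only the last label counts; a hostname like irs.gov.evil.example
    # must NOT light up the indicator.
    if labels[-1] in REGULATED_TLDS:
        return "verified-registry"
    return "no-signal"
```

So `identity_signal("www.irs.gov")` lights the indicator while `identity_signal("irs.gov.evil.example")` does not. A production version would need a public-suffix list to cover country-code regimes such as .gov.uk, where the regulated suffix spans two labels.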

Rather than remove website identity based on the response to poor design 
implementation, we should consider making it better. I believe website owners 
would be more likely to seek verification if they can really protect their 
brand online. And consumers would proactively look for it. 

Website identity won’t ever be perfect, but with the new technologies and 
methodologies that have come out in the past 18 months, CAs and other providers 
can do much more to tighten up the verification process while making it faster 
and lower cost for customers.

[1] https://www.gla.ac.uk/news/archiveofnews/2011/april/headline_194655_en.html 


- Paul




> On Oct 2, 2019, at 5:12 PM, Kirk Hall via dev-security-policy 
>  wrote:
> 
> On September 21, I sent a message to the Mozilla community with the results 
> of a survey of all of Entrust Datacard’s customers (both those who use EV 
> certificates, and those who don’t) concerning what they think about website 
> identity in browsers, browser UIs in general, and EV browser UIs in 
> particular. [1]  The data we published was based on 504 results collected 
> over two days (a pretty good response).
> 
> The survey was distributed in a way that each customer could only respond 
> once.  We left the survey open, and can now publish updated results from a 
> combined total of 804 separate certificate customers (300 more than last 
> time).  The results mirror the results we first reported two weeks ago – and 
> based on Paul Walsh’s data on when survey results should be considered 
> statistically significant [2], this means that the updated survey results are 
> very solid.
> 
> Here is a summary of the updated respondent results for the six questions 
> listed below.
> 
> (1) 97% of respondents agreed or strongly agreed with the statement: 
> "Customers / users have the right to know which organization is running a 
> website if the website asks the user to provide sensitive data."  (This is 
> the same result as for the prior sample.)
> 
> (2) 94% of respondents agreed or strongly agreed with the statement “Identity 
> on the Internet is becoming increasingly important over time.”  (This is 1% 
> higher than in the prior sample.)
> 
> (3) When respondents were asked “How important is it that your website has an 
> SSL certificate that tells customers they are at your company's official 
> website via a unique and consistent UI in the URL bar?” 76% said it was 
> either extremely important or very important to them. Another 13% said it was 
> somewhat important (total: 89%).  (This is 2% higher than in the prior 
> sample.)
> 
> (4) When respondents were asked “Do you believe that positive visual signals 
> in the browser UI (such as the EV UI for EV sites) are important to encourage 
> website owners to choose EV certificates and undergo the EV validation 
> process for their organization?” 72% said it was either extremely important 
> or very important to them.  (This is down 1% from the prior sample.) Another 
> 18% said it was somewhat important.  (This is up 1% from the 

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 8, 2019, at 12:44 PM, Ryan Sleevi  wrote:
> 
> Paul,

[snip]

> It does not seem you're interested in finding solutions for the issues,

[PW] You are mixing things up Ryan. I am interested in finding solution to 
issues. I specifically kept my message on point, which was your tone and 
approach to communication - this is equally important to the content you put 
forward. My point was made and you obviously didn’t receive it well - I’m ok 
with that. Most people don’t respond well to criticism. 

I will only contribute proposed solutions for issues where I possess deep 
domain expertise - moderating and chairing standards and best-practices work is 
one such area, hence my contribution.

> and you've continued to shift your message, so perhaps it might be better to 
> continue that discussion elsewhere?

[PW] In my opinion, this is the right place. You don’t get to dictate where and 
when. The alternative would be to walk into a broom cupboard and scream at the 
wall. 

I won’t comment on this matter any further as I think we’ve labored the subject 
and I don’t want to take up people’s time any further. 

- Paul


> 
> Thanks.
> 
> On Tue, Oct 8, 2019 at 3:21 PM Paul Walsh  > wrote:
> Ryan,
> 
> You just proved me right by saying I’m confused because I hold an opinion 
> about how you conduct yourself when collaborating with industry stakeholders. 
> My observations are the same across the board. I don’t think I’m confused. 
> But you’re welcome to disagree with me. And, it’s not off-topic. We should be 
> respectful when communicating in forums like this. I think your communication 
> is sometimes disrespectful. 
> 
> You also tell people they are confused about bylaws and other documents when 
> they’re in disagreement with you. It’s possible for someone to fully 
> understand and appreciate specific guidelines and disagree with you at the 
> same time.
> 
> I’ve contributed to many W3C specifications over the years - I co-founded 
> two, including the Mobile Web Initiative. I was also Chair of BIMA.co.uk 
>  for three years. My point is this, when contributing to 
> industry initiatives, I learned that there will always be instances where 
> individuals need to be reminded to show respect to others when communicating 
> differences of opinion - especially when there is a strong chance of culture 
> differences. I don’t mind being reminded from time to time. Nobody is perfect.
> 
> You can take this feedback, or leave it. Your call. 
> 
> - Paul
> 
> 
> 
> 
>> On Oct 8, 2019, at 12:09 PM, Ryan Sleevi > > wrote:
>> 
>> 
>> 
>> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh > > wrote:
>> Dear Ryan,
>> 
>> It would help a great deal, if you tone down your constant insults towards 
>> the entire CA world. Questioning whether you should trust any CA is a bridge 
>> too far. 
>> 
>> Instead, why don’t you try to focus on specific issues with specific CAs, or 
>> specific issues with most CAs. I don’t think you have a specific issue with 
>> every CA in the world. 
>> 
>> If specific CAs fail to do what you think is appropriate for browser 
>> vendors, perhaps you need to implement new, or improve existing audits? 
>> Propose solutions, implement checks and execute better reviews. Then iterate 
>> until everyone gets it right. 
>> 
>> Paul,
>> 
>> I appreciate your response, even if I believe it's largely off-topic, deeply 
>> confused, and personally insulting.
>> 
>> This thread is acknowledging there are systemic issues, that it's not with 
>> specific CAs, and that the solutions being put forward aren't working, and 
>> so we need better solutions. It's also being willing to acknowledge that if 
>> we can't find systemic fixes, it may be that we have a broken system, and we 
>> should not be afraid of looking to improve or replace the system.
>> 
>> Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when 
>> it's just a plurality of "more than one CA". That's a bias on the reader's 
>> part, and suggesting that every plurality be accompanied by a qualified 
>> ("Some", "most") is just tone policing rather than engaging on substance.
>> 
>> That said, it's entirely inappropriate to chastise me for highlighting 
>> issues of non-compliance, and attempt to identify the systemic issue 
>> underneath it. It's also entirely inappropriate to insist that I personally 
>> solve the issue, especially when significant effort has been expended to do 
>> address these issues so far, which continue to fail without much explanation 
>> as to why they're failing. Suggesting that we should accept regular failures 
>> and just deal with it, unfortunately, has no place in reasonable or rational 
>> conversation about how to improve things. That's because such a position is 
>> not interested in finding solutions, or improving, but in accepting the 
>> status quo.
>> 
>> If you have suggestions on why these systemic 

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy

> On Oct 8, 2019, at 12:51 PM, Matthew Hardeman  wrote:
> 
> 
> On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy 
>  > wrote:
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  > wrote:
> 
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
> 
> Communication styles aside, I believe there's merit to far more serious 
> community consideration of the notion that either the system overall or the 
> standard for expectations of the system's performance are literally broken.  
> There's probably a better forum for that discussion than this thread, but I 
> echo that I believe the notion has serious merit.

[PW] It looks like I said those words above, but I didn’t :)


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy
Ryan,

You just proved me right by saying I’m confused because I hold an opinion about 
how you conduct yourself when collaborating with industry stakeholders. My 
observations are the same across the board. I don’t think I’m confused. But 
you’re welcome to disagree with me. And, it’s not off-topic. We should be 
respectful when communicating in forums like this. I think your communication 
is sometimes disrespectful. 

You also tell people they are confused about bylaws and other documents when 
they’re in disagreement with you. It’s possible for someone to fully understand 
and appreciate specific guidelines and disagree with you at the same time.

I’ve contributed to many W3C specifications over the years - I co-founded two, 
including the Mobile Web Initiative. I was also Chair of BIMA.co.uk for three 
years. My point is this, when contributing to industry initiatives, I learned 
that there will always be instances where individuals need to be reminded to 
show respect to others when communicating differences of opinion - especially 
when there is a strong chance of culture differences. I don’t mind being 
reminded from time to time. Nobody is perfect.

You can take this feedback, or leave it. Your call. 

- Paul




> On Oct 8, 2019, at 12:09 PM, Ryan Sleevi  wrote:
> 
> 
> 
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  > wrote:
> Dear Ryan,
> 
> It would help a great deal, if you tone down your constant insults towards 
> the entire CA world. Questioning whether you should trust any CA is a bridge 
> too far. 
> 
> Instead, why don’t you try to focus on specific issues with specific CAs, or 
> specific issues with most CAs. I don’t think you have a specific issue with 
> every CA in the world. 
> 
> If specific CAs fail to do what you think is appropriate for browser vendors, 
> perhaps you need to implement new, or improve existing audits? Propose 
> solutions, implement checks and execute better reviews. Then iterate until 
> everyone gets it right. 
> 
> Paul,
> 
> I appreciate your response, even if I believe it's largely off-topic, deeply 
> confused, and personally insulting.
> 
> This thread is acknowledging there are systemic issues, that it's not with 
> specific CAs, and that the solutions being put forward aren't working, and so 
> we need better solutions. It's also being willing to acknowledge that if we 
> can't find systemic fixes, it may be that we have a broken system, and we 
> should not be afraid of looking to improve or replace the system.
> 
> Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when 
> it's just a plurality of "more than one CA". That's a bias on the reader's 
> part, and suggesting that every plurality be accompanied by a qualified 
> ("Some", "most") is just tone policing rather than engaging on substance.
> 
> That said, it's entirely inappropriate to chastise me for highlighting issues 
> of non-compliance, and attempt to identify the systemic issue underneath it. 
> It's also entirely inappropriate to insist that I personally solve the issue, 
> especially when significant effort has been expended to do address these 
> issues so far, which continue to fail without much explanation as to why 
> they're failing. Suggesting that we should accept regular failures and just 
> deal with it, unfortunately, has no place in reasonable or rational 
> conversation about how to improve things. That's because such a position is 
> not interested in finding solutions, or improving, but in accepting the 
> status quo.
> 
> If you have suggestions on why these systemic issues are still happening, 
> despite years of effort to improve them, I welcome them. However, there's no 
> place for reasonable discussion if you don't believe we should have open and 
> frank conversations about issues, about the misaligned incentives, or about 
> how existing efforts to prevent these incidents by Browsers are falling flat.



Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Paul Walsh via dev-security-policy
I read Jeremy’s last response before posting my comment. 

Dear Ryan,

It would help a great deal, if you tone down your constant insults towards the 
entire CA world. Questioning whether you should trust any CA is a bridge too 
far.

Instead, why don’t you try to focus on specific issues with specific CAs, or 
specific issues with most CAs. I don’t think you have a specific issue with 
every CA in the world.

If specific CAs fail to do what you think is appropriate for browser vendors, 
perhaps you need to implement new, or improve existing audits? Propose 
solutions, implement checks and execute better reviews. Then iterate until 
everyone gets it right. 

I could write a book on how Google is the least “trustworthy” browser vendor on 
the planet. I could write another book about how Google is constantly 
contradicting its own advice and best practices. One example is where Google 
tells us to focus on the part of the URL that matters most - the domain name. 
But over here we have AMP, where URLs go to die a slow painful death within 
Google’s closed system, adding no value to the world outside of advertising. 
The list is endless when it comes to the lack of respect for people’s privacy 
from *some* browser vendors. Not all browsers are evil. Not all CAs are evil.

So, please can you get off your high horse and stick to facts and propose 
solutions instead of constantly making personal insults and bringing up 
problems without implementing new processes to address same. 

Can we just keep in mind that we’re all trying to do our jobs? No company is 
perfect. No process is perfect. No technology solution is perfect. 

Peace!

- Paul

p.s. I don’t work for a CA and never have. And I believe there are many 
weaknesses that could and should be better addressed.



> On Oct 7, 2019, at 5:45 PM, Ryan Sleevi via dev-security-policy 
>  wrote:
> 
> On Mon, Oct 7, 2019 at 7:06 PM Jeremy Rowley 
> wrote:
> 
>> Interesting. I can't tell with the Netlock certificate, but the other
>> three non-EKU intermediates look like replacements for intermediates that
>> were issued before the policy date and then reissued after the compliance
>> date.  The industry has established that renewal and new issuance are
>> identical (source?), but we know some CAs treat these as different
>> instances.
> 
> 
> Source: Literally every time a CA tries to use it as an excuse? :)
> 
> My question is how we move past “CAs provide excuses”, and at what point
> the same excuses fall flat?
> 
> While that's not an excuse, I can see why a CA could have issues with a
>> renewal compared to new issuance as changing the profile may break the
>> underlying CA.
> 
> 
> That was Quovadis’s explanation, although with no detail to support that it
> would break something, simply that they don’t review the things they sign.
> Yes, I’m frustrated that CAs continue to struggle with anything that is not
> entirely supervised. What’s the point of trusting a CA then?
> 
> However, there's probably something better than "trust" vs. "distrust" or
>> "revoke" v "non-revoke", especially when it comes to an intermediate.  I
>> guess the question is what is the primary goal for Mozilla? Protect users?
>> Enforce compliance?  They are not mutually exclusive objectives of course,
>> but the primary drive may influence how to treat issuing CA non-compliance
>> vs. end-entity compliance.
> 
> 
> I think a minimum goal is to ensure the CAs they trust are competent and
> take their job seriously, fully aware of the risk they pose. I am more
> concerned about issues like this which CAs like QuoVadis acknowledges they
> would not cause.
> 
> The suggestion of a spectrum of responses fundamentally suggests root
> stores should eat the risk caused by CAs flagrant violations. I want to
> understand why browsers should continue to be left holding the bag, and why
> every effort at compliance seems to fall on how much the browsers push.
> 
> Of the four, only Quovadis has responded to the incident with real
>> information, and none of them have filed the required format or given
>> sufficient information. Is it too early to say what happens before there is
>> more information about what went wrong? Key ceremonies are, unfortunately,
>> very manual beasts. You can automate a lot of it with scripting tools, but
>> the process of taking a key out, performing a ceremony, and putting things
>> a way is not automated due to the off-line root and FIPS 140-3
>> requirements.
> 
> 
> Yes, I think it’s appropriate to defer discussing what should happen to
> these specific CAs. However, I don’t think it’s too early to begin to try
> and understand why it continues to be so easy to find massive amounts of
> misissuance, and why policies that are clearly communicated and require
> affirmative consent is something CAs are still messing up. It suggests
> trying to improve things by strengthening requirements isn’t helping as
> much as needed, and perhaps more consistent distrusting is a 

Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
> On Oct 2, 2019, at 3:41 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> On 10/2/2019 3:00 PM, Paul Walsh via dev-security-policy wrote:
>> On Oct 2, 2019, at 2:52 PM, Ronald Crane via dev-security-policy 
>>  wrote:
> [snip]
>>> Some other changes that might help reduce phishing are:
>>> 1. Site owners should avoid using multiple domains, because using them 
>>> habituates users to the idea that there are several valid domains for a 
>>> given entity. Once users have that idea, phishers are most of the way to 
>>> success. Some of the biggest names in, e.g., brokerage services are 
>>> offenders on this front.
>> [PW] Companies like Google own so many domains and sub-domains that it’s 
>> difficult to stay ahead of them. I think this is an unrealistic expectation. 
>> So if other browser vendors have the same opinion, they should look inward.
> It is not unrealistic to expect, e.g., Blahblah Investments, SIPC, to use 
> only "www.blahblahinvestments.com" for everything related to its retail 
> investment services. It *is* unreasonable to habituate users to bad practices.

I agree. 

>>> 2. Site owners should not use URL-shortening services, for the same reason 
>>> as (1).
>> Site owners using shortened URLs isn’t the problem in my opinion. Even if 
>> shortened URLs went away, phishing wouldn’t stop. Unless you have research 
>> to provides more insight?
> Where did I say that phishing would "stop" if URL shortening services 
> disappeared? I said avoiding them would be helpful, since it would reinforce 
> the idea that there is one correct domain per entity, or at least per entity 
> service. Probably all the entity services should be subdomains of the one 
> correct domain, but alas it will take a sustained security campaign and a 
> decade to make a dent in that problem.

I agree. I said that if they disappeared, it wouldn’t stop phishing - so it’s 
still a problem. I wanted to use an extreme example to demonstrate a point. 


>>> 3. Site owners should not use QR codes, since fake ones are perfect for 
>>> phishing.
>> Same as above. You don’t need to mask URLs to have a successful phishing 
>> campaign.
> No, you don't "need" to do it. It is, however, a very useful weapon in 
> phishers' quivers.

I agree.

>> sɑlesforce[.com] is available for purchase right now.
> 
> I was going to suggest banning non-Latin-glyph domains, since they are yet 
> another useful phishing weapon. FF converts all such domains into Punycode 
> when typed or pasted into the address bar, though the conversion is displayed 
> below the address bar, not in it. So your example becomes 
> "http://xn--slesforce-51d.com/;.

Just providing an example of a URL that uses .com. I can provide more without 
using special characters to demonstrate the same point. 
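Ronald's Punycode observation can be reproduced with Python's standard library (my illustration, not from the thread): the confusable U+0251 (ɑ, LATIN SMALL LETTER ALPHA) survives encoding as an ASCII tail, which is exactly what browsers surface as an xn-- label:

```python
# U+0251 renders almost identically to "a" in many fonts -- the basis of
# the sɑlesforce.com example above.
spoof = "s\u0251lesforce"

# Python's stdlib punycode codec produces the ASCII tail that browsers
# prefix with "xn--" when displaying IDN labels.
tail = spoof.encode("punycode").decode("ascii")
print(f"xn--{tail}.com")  # xn--slesforce-51d.com, matching Firefox's display

# Decoding round-trips back to the confusable original.
assert tail.encode("ascii").decode("punycode") == spoof
```

The registrable name is pure ASCII on the wire; whether a user ever sees `xn--slesforce-51d.com` or the lookalike glyphs depends entirely on each browser's IDN display policy.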





Re: [FORGED] Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 3:27 PM, Peter Gutmann via dev-security-policy 
>  wrote:
> 
> Ronald Crane via dev-security-policy  
> writes:
> 
>> "Virtually impossible"? "Anyone"? Really? Those are big claims that need real
>> data.
> 
> How many references to research papers would you like?  Would a dozen do, or
> do you want two dozen?

I would like to see one research paper published by one browser vendor showing 
that website identity visual indicators cannot work. 

I’m not asking for an individual who tricked the system to get an EV cert - 
that doesn’t prove anything in relation to visual indicators and the 
effectiveness of well designed UI. It just proves that a specific process of 
getting verified can be improved. 

> 
> (This has been researched to death, it's not rocket science, given a bit of
> time you can dig up vast numbers of references.  The only reason I haven't do
> it for this post is that I get the feeling I'd be wasting said time doing so).

I’ve been working on URL classification since I co-instigated the W3C Standard 
in 2004. Here’s a link to where you can see one of the first browser add-ons we 
built with visual indicators for more context around URLs 
https://www.w3.org/2001/sw/sweo/public/UseCases/Segala/ 
 Segala was the first 
company I founded. 

I’d love you to show me the type of research I’ve asked for. I’m open to 
learning more. I’m not new to this game. I worked on integrated browsers and 
search engines in the 90’s at AOL. 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy

> On Oct 2, 2019, at 3:20 PM, Kurt Roeckx  wrote:
> 
> On Wed, Oct 02, 2019 at 03:17:31PM -0700, Paul Walsh wrote:
 In separate research, CAs have shown data to demonstrate that website 
 owners want to have their identity verified. 
>>> 
>>> They have not. In fact, I would say that most website owners are perfectly
>>> happy with DV certificates.
>> 
>> What’s your source of data to substantiate what you “would say”? We need to 
>> start talking about facts and data.
> 
> How many DV, OV and EV certificates are there? I think it's rather
> clear what most website owners use.

Let’s try this another way. 

If there was a well implemented visual indicator that users relied on for 
trust, and ID verification was low cost, fast and accessible, more website 
owners would be more likely to want that visual indicator - especially if their 
customers asked them why they didn’t have it.

We saw this from our research - crypto exchanges and wallets were being 
requested by their customers to seek verification. It can work. 

> 
> 
> Kurt
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
> On Oct 2, 2019, at 3:18 PM, Ronald Crane via dev-security-policy 
>  wrote:
> 
> 
> On 10/2/2019 2:47 PM, Paul Walsh via dev-security-policy wrote:
>> On Oct 2, 2019, at 1:16 PM, Ronald Crane via dev-security-policy 
>>  wrote:
>>> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>>>> New tools such as Modlishka now automate phishing attacks, making it 
>>>> virtually impossible for any browser or security solution to detect -  
>>>> bypassing 2FA. Google has admitted that it’s unable to detect these 
>>>> phishing scams as they use a phishing domain but instead of a fake 
>>>> website, they use the legitimate website to steal credentials, including 
>>>> 2FA. This is why Google banned its users from signing into its own 
>>>> websites via mobile apps with a WebView. If Google can prevent these 
>>>> attacks, Mozilla can’t.
>>> I understand that Modlishka emplaces the phishing site as a MITM. This is 
>>> yet another reason for browser publishers to help train their users to use 
>>> only authentic domain names, and also to up their game on detecting and 
>>> banning phishing domains. I don't think it says much about the value, or 
>>> lack thereof, of EV certs. As has been cited repeatedly in this thread, 
>>> most phishing sites don't even bother to use SSL, indicating that most 
>>> users who can be phished aren't verifying the correct domain.
>> Ronald - it’s virtually impossible for anyone to spot well designed phishing 
>> attacks. Teaching people to check the URL doesn’t work - I can catch out 99% 
>> with a single test, every time.
> 
> "Virtually impossible"? "Anyone"? Really? Those are big claims that need real 
> data. I'm pretty sure I haven't been phished yet.

Yes :)

I have results from 1,845 people so far. I published the test on Twitter, 
inside our Telegram group and presented it many times at many blockchain 
conferences around the world. Only 4 people got it right - and I also put it in 
front of great security professionals. My point is that some phishing scams are 
virtually impossible for almost anyone to spot. I’ve seen some top 
social engineers own up to falling for a phishing test at work on Twitter - and 
this is what they do for a living. It’s not a measurement of experience or 
expertise. 

Here’s one test https://twitter.com/Paul__Walsh/status/1174359874932621316?s=20

> 
> In any case, have we ever really tried to teach users to use the correct 
> domain? As I noted in a recent response, many site owners do things -- such 
> as using multiple domains for a single entity, using URL-shortening services, 
> using QR codes, etc. -- that habituate users to the idea that there's more 
> than one correct domain, and/or that they can get it from untrustworthy 
> sources. Once they have that idea, phishing is easy.

This won’t resolve the problem unfortunately. Companies that use few domains 
are high profile targets. 

> 
>> It’s the solution if users had a reliable way to check website identity as 
>> I’ve explained
> And EV certs do this how? Please address https://stripe.ian.sh .

I already addressed this by asking for a single instance of where an attacker 
used an EV certificate. I provide quite a lot of text around this point - 
pointing out that just because you can prove something can be done, doesn’t 
mean it will be. No security solution on the market is 100%. No company is 
hack-proof. Threat actors will only spend time, energy and cost if it’s worth 
it. 

From a security POV, the bar to attaining an EV cert is too high for it to be a 
real threat. They have to set up a real company and it can only be used once. So 
when the cert is revoked that’s the end of it. But, the process could be 
improved.

Back to my question, can you provide examples of attacks that used an EV cert?

I’m not here to defend EV certs or CAs. I’m here to ask that you stop and 
rethink your decision to remove UI for website identity. This isn’t to say that 
we can’t rethink the CA model and tech. From what I can see, browser vendors 
are railroading everyone, including CAs. There’s no collaboration here. Just a 
few people who *think* they know what’s best. I see no evidence to substantiate 
any decisions. 


>> Perhaps you can comment on my data about users who do rely on a new visual 
>> indicator and the success that has seen?
> Please post a link to a paper describing it, including the methodology you 
> used.

I’ve already published the methodology used on a thread in this forum with all 
the data collected in relation to this point. I just haven’t taken the time to 
PDF it and stick it on a website. It will however, be published on a website in 
the form of a guest post - la

Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
> On Oct 2, 2019, at 3:11 PM, Kurt Roeckx  wrote:
> 
> On Wed, Oct 02, 2019 at 02:48:56PM -0700, Paul Walsh wrote:
>> On Oct 2, 2019, at 12:52 AM, Kurt Roeckx via dev-security-policy 
>>  wrote:
>>> 
>>> On 2019-10-02 09:20, Kurt Roeckx wrote:
 On 2019-10-02 02:39, Paul Walsh wrote:
> 
> According to Ellis, the goal for a customer survey is to get feedback 
> from people who had recently experienced "real usage" of the product. The 
> key question in the survey for these people according to Ellis, is:
> 
> "How would you feel if you could no longer rely on MetaCert's green 
> shield?"
 No, the question he would be asking is:
 "How would you feel if you could no longer use MetaCert's EV certificates?"
>>> 
>>> And it's probably better to even turn that into:
>>> How would you feel if you could no longer buy MetaCert's EV certificates?
>> 
>> [PW] MetaCert is not a CA. We don’t have any relationships with any CAs 
>> either. 
> 
> Well, for what Ellis is talking about, it's asking about a
> product, and how the user would feel if that product can't be used
> anymore.
> 
> That just shows that there are users that want your product, not
> that everybody wants it.

I’m not sure I understand the point you would like me to take from this. Not 
every person in the world wants to use Firefox; that doesn’t stop Mozilla from 
building browser software. No company in the world tries to sell to everyone. 
If most people want browser UI for website identity, are you saying we 
shouldn’t give it to them because not everyone proactively said they wanted 
it? 


> 
>> Our research was aimed at end-users, as I said previously. We have proof 
>> that users want to use a visual indicator for trust. And we also 
>> demonstrated that it’s possible to protect users with well designed browser 
>> UI/UX.
> 
> Sure, there will be users that want that, nobody is denying that.

Great. Perhaps we can talk about the things that we agree on. 

> 
>> In separate research, CAs have shown data to demonstrate that website owners 
>> want to have their identity verified. 
> 
> They have not. In fact, I would say that most website owners are perfectly
> happy with DV certificates.

What’s your source of data to substantiate what you “would say”? We need to 
start talking about facts and data.

> 
> 
> Kurt
> 

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
On Oct 2, 2019, at 2:52 PM, Ronald Crane via dev-security-policy 
 wrote:
> 
> On 10/2/2019 1:16 PM, Ronald Crane via dev-security-policy wrote:
>> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>>> New tools such as Modlishka now automate phishing attacks, making it 
>>> virtually impossible for any browser or security solution to detect -  
>>> bypassing 2FA. Google has admitted that it’s unable to detect these 
>>> phishing scams as they use a phishing domain but instead of a fake website, 
>>> they use the legitimate website to steal credentials, including 2FA. This 
>>> is why Google banned its users from signing into its own websites via 
> >> mobile apps with a WebView. If Google can’t prevent these attacks, Mozilla 
> >> can’t either.
>> 
>> I understand that Modlishka emplaces the phishing site as a MITM. This is 
>> yet another reason for browser publishers to help train their users to use 
>> only authentic domain names, and also to up their game on detecting and 
>> banning phishing domains. I don't think it says much about the value, or 
>> lack thereof, of EV certs. As has been cited repeatedly in this thread, most 
>> phishing sites don't even bother to use SSL, indicating that most users who 
>> can be phished aren't verifying the correct domain.
>> 
>> -R
>> 
> Some other changes that might help reduce phishing are:
> 
> 1. Site owners should avoid using multiple domains, because using them 
> habituates users to the idea that there are several valid domains for a given 
> entity. Once users have that idea, phishers are most of the way to success. 
> Some of the biggest names in, e.g., brokerage services are offenders on this 
> front.

[PW] Companies like Google own so many domains and sub-domains that it’s 
difficult to stay ahead of them. I think this is an unrealistic expectation. So 
if other browser vendors have the same opinion, they should look inward.

> 
> 2. Site owners should not use URL-shortening services, for the same reason as 
> (1).

Site owners using shortened URLs isn’t the problem, in my opinion. Even if 
shortened URLs went away, phishing wouldn’t stop. Unless you have research that 
provides more insight?

> 
> 3. Site owners should not use QR codes, since fake ones are perfect for 
> phishing.

Same as above. You don’t need to mask URLs to run a successful phishing 
campaign. The homograph domain sɑlesforce[.com] is available for purchase right now. 
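To make the homograph point concrete, here is a minimal Python sketch (stdlib only; the domain names are just illustrations) showing why such a domain passes a casual glance: the lookalike character is a distinct Unicode codepoint, and a crude mixed-script check can flag it.

```python
import unicodedata

legit = "salesforce.com"
spoof = "s\u0251lesforce.com"  # U+0251 LATIN SMALL LETTER ALPHA, not ASCII 'a'

# Visually near-identical, yet different strings.
assert spoof != legit

# Reveal the substitution by naming each differing codepoint.
for real, fake in zip(legit, spoof):
    if real != fake:
        print(f"{real!r} is {unicodedata.name(real)}; {fake!r} is {unicodedata.name(fake)}")

# Crude heuristic: flag a hostname whose letters mix ASCII and non-ASCII.
def mixed_script(host: str) -> bool:
    letters = [c for c in host if c.isalpha()]
    return any(ord(c) > 127 for c in letters) and any(ord(c) < 128 for c in letters)

print(mixed_script(spoof))  # True
print(mixed_script(legit))  # False
```

A real detector would use the Unicode confusables data rather than this one-line heuristic, but the sketch shows why "just read the URL" fails against a motivated attacker.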

> 
> 4. Browser publishers should petition ICANN to revoke most of the gTLDs it 
> has approved, since they provide fertile ground for phishing.

Petitioning them won’t work. gTLDs are here to stay, even if we dislike them. 
Also, most phishing sites use .com and other well known TLDs. I’m not saying 
gTLDs aren’t used, they are. But they’re not needed. 

So, bringing it back to Mozilla. I’d still love to see recent research/data to 
back up Mozilla’s decision to remove identity UI in Firefox. By promoting the 
padlock without education about phishing, browser vendors are actually making 
the web more dangerous. 

- Paul


> There appear to be ~1900 such gTLDs [1]. I doubt that even the largest 
> corporations have registered their base domains under every such gTLD. Where 
> does "www.microsoft.somenamethatICANNmightaddasagTLD" go? I sure don't know 
> where "www.zippenhop.[pick a non-.com gTLD] goes.
> 
> [1]  Search for "delegated" status at 
> https://newgtlds.icann.org/en/program-status/delegated-strings .



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
On Oct 2, 2019, at 12:52 AM, Kurt Roeckx via dev-security-policy 
 wrote:
> 
> On 2019-10-02 09:20, Kurt Roeckx wrote:
>> On 2019-10-02 02:39, Paul Walsh wrote:
>>> 
>>> According to Ellis, the goal for a customer survey is to get feedback from 
>>> people who had recently experienced "real usage" of the product. The key 
>>> question in the survey for these people according to Ellis, is:
>>> 
>>> "How would you feel if you could no longer rely on MetaCert's green shield?"
>> No, the question he would be asking is:
>> "How would you feel if you could no longer use MetaCert's EV certificates?"
> 
> And it's probably better to even turn that into:
> How would you feel if you could no longer buy MetaCert's EV certificates?

[PW] MetaCert is not a CA. We don’t have any relationships with any CAs either. 

We do not sell verification services to website owners. We’re doing 
verification at scale, for free. We generate revenue on the other end - selling 
security services as well as API access to the data. Trying to protect users 
from threats is clearly not working. So they need to know what’s verified as 
not-unsafe. I’m afraid to use the word safe too often because it’s vague.

Our research was aimed at end-users, as I said previously. We have proof that 
users want to use a visual indicator for trust. And we also demonstrated that 
it’s possible to protect users with well designed browser UI/UX.

In separate research, CAs have shown data to demonstrate that website owners 
want to have their identity verified. 

I haven’t seen any research / data to show why Mozilla should remove UI instead 
of improving it. 

FWIW, my COO and engineers built the official Firefox add-ons for digg, 
Delicious, Yahoo!, eBay, PayPal, Google and Microsoft. They also built and 
maintained spreadfirefox.com - so we have a lot of experience working “with” 
Mozilla. We’re blown away by the team’s decision to remove UI without any 
research to back up their decisions.

Was there anything you disagreed with in my lengthy responses Kurt? 

Thanks,
Paul






Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-02 Thread Paul Walsh via dev-security-policy
On Oct 2, 2019, at 1:16 PM, Ronald Crane via dev-security-policy 
 wrote:
> 
> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>> New tools such as Modlishka now automate phishing attacks, making it 
>> virtually impossible for any browser or security solution to detect -  
>> bypassing 2FA. Google has admitted that it’s unable to detect these phishing 
>> scams as they use a phishing domain but instead of a fake website, they use 
>> the legitimate website to steal credentials, including 2FA. This is why 
>> Google banned its users from signing into its own websites via mobile apps 
> >> with a WebView. If Google can’t prevent these attacks, Mozilla can’t either.
> 
> I understand that Modlishka emplaces the phishing site as a MITM. This is yet 
> another reason for browser publishers to help train their users to use only 
> authentic domain names, and also to up their game on detecting and banning 
> phishing domains. I don't think it says much about the value, or lack 
> thereof, of EV certs. As has been cited repeatedly in this thread, most 
> phishing sites don't even bother to use SSL, indicating that most users who 
> can be phished aren't verifying the correct domain.

Ronald - it’s virtually impossible for anyone to spot a well designed phishing 
attack. Teaching people to check the URL doesn’t work - I can catch out 99% of 
people with a single test, every time. It would be the solution if users had a 
reliable way to check website identity, as I’ve explained. Almost all breaches 
start with phishing, and it’s getting worse. 

Perhaps you can comment on my data about users who do rely on a new visual 
indicator, and the success that it has seen? 

Any opinion I’ve read is just that - opinion, with zero data or evidence to 
substantiate anything cited. The closest I’ve seen is research that’s more than 
10 years old.

According to Webroot 93% of all new phishing sites have an SSL certificate. 
According to MetaCert it’s more than 96%. This is increasing as Let’s Encrypt 
issues more free certs. I think people are mixing up spam with phishing. Or 
they’re just guessing based on what they see personally. It’s time to reference 
facts from the security world.

With billions of dollars being invested in cybersecurity, and many billions 
more spent paying for those services, it’s still technically impossible for any 
company, with any solution, to detect every new malicious URL - and it never 
will be possible. 

So, most attacks start with phishing. Most phishing sites have a padlock. Most 
people trust sites with a padlock. Security companies can’t stop all new 
threats.

What’s the answer? It certainly isn’t removing website identity and promoting 
the padlock.

- Paul

> 
> -R
> 
> 



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-01 Thread Paul Walsh via dev-security-policy
On Sunday, September 22, 2019 at 7:49:14 AM UTC-7, Gijs Kruitbosch wrote:

[snip]

> On 22/09/2019 00:52, Kirk Hall wrote:
> > (1) *97%* of respondents agreed or strongly agreed with the statement: 
> > "Customers / users have the right to know which organization is running a 
> > website if the website asks the user to provide sensitive data."
> 
> Although I intuitively would like to think that we have a right to know 
> "who is running a website", this doesn't mean that EV certificate 
> information is an appropriate vehicle for this information. Even without 
> all the significant issues that EV certification has, if we pretended it 
> was perfect, it still only shows UI for the tls connection made for the 
> toplevel document, whereas other resources and subframes could easily 
> have (and usually do) come from other domains that either do not have an 
> EV cert or have one belonging to a different entity. And even if that 
> were not the case, the entity controlling the website does not 
> necessarily control the data in a legal sense.*** So the EV UI does not, 
> in the legal sense, always indicate who will control the "sensitive 
> data" that users/customers submit.

[PW] I agree with some of this. When I co-instigated the creation of the W3C 
Standard for URL Classification and Content Labeling that replaced PICS in 
2009, it was for this reason; PICS didn’t support assertions about folders - 
only domains. Furthermore, when I co-founded the W3C Mobile Web Initiative I 
helped to write the first draft of the “mobileOK” specification - the ability 
to make assertions about any part of a URI was also a priority then. So, I 
agree with general observations about the importance of being able to 
distinguish between domains, sub-domains, folders etc. when making assertions 
about the content or content creator. 
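The point above about assertions at finer granularity than a domain can be sketched in a few lines. This is a hypothetical label store and hypothetical URLs, not the W3C specification itself; it only illustrates why labeling by (host, path-prefix) is more expressive than labeling by domain alone.

```python
from urllib.parse import urlsplit

# Hypothetical label store keyed by (hostname, path prefix). The longest
# matching prefix wins, so a folder can carry a different assertion than
# the domain that contains it.
LABELS = {
    ("example.com", "/"): "verified-publisher",
    ("example.com", "/user-content/"): "unverified-ugc",
}

def classify(url: str) -> str:
    parts = urlsplit(url)
    host, path = parts.hostname or "", parts.path or "/"
    best_prefix, label = "", "unlabeled"
    for (h, prefix), lab in LABELS.items():
        if h == host and path.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, label = prefix, lab
    return label

print(classify("https://example.com/user-content/page"))  # unverified-ugc
print(classify("https://example.com/about"))              # verified-publisher
print(classify("https://other.example/"))                 # unlabeled
```

The same longest-prefix idea extends to subdomains and query paths; the point is simply that an assertion need not apply to an entire origin.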

However, there’s a lot to unpack here, and it draws the wrong conclusions. 
Allow me to first paint the problem that browser vendors are making worse with 
their decision to scrap website identity instead of fixing what they got wrong 
with the UI and UX.

According to Verizon, phishing represents 93% of all data breaches.

According to Proofpoint, in the first quarter of 2019, cyberattacks using 
dangerous links outnumbered those with malicious attachments by five to one.

According to Webroot, nearly 1.5 million new phishing sites are created each 
month.

According to Wombat Security 76% of businesses reported being a victim of a 
phishing attack in the last year.

According to IBM, phishing attacks increased 250% in 2018.

According to Palo Alto Networks, 70% of all newly registered domains are 
malicious, suspicious or not safe for work.

New tools such as Modlishka now automate phishing attacks, making them 
virtually impossible for any browser or security solution to detect - bypassing 
2FA. Google has admitted that it’s unable to detect these phishing scams: they 
use a phishing domain, but instead of a fake website they relay the legitimate 
website to steal credentials, including 2FA codes. This is why Google banned 
its users from signing into its own websites via mobile apps with a WebView. If 
Google can’t prevent these attacks, Mozilla can’t either. 

What’s the common thread? Almost all the cybersecurity problems we read about, 
start with one user falling for a counterfeit website. 

According to Webroot, 93% of all new phishing domains display a padlock thanks 
to free, automatically issued DV certificates. According to MetaCert and some 
CAs, 98% of DV certs used for phishing were issued for free by Let’s Encrypt. 
Given that Let’s Encrypt's growth is exploding, this problem can only get 
worse. 

If browser vendors designed proper UI and UX for website identity in the first 
place, the web would be a much safer place. CAs are not responsible for browser 
UI. EV certs aren’t responsible for browser UI. Browser vendors designed the UI 
and UX and it’s totally broken. 

People who say website identity is broken, are in fact pointing the finger at 
browser vendors, even if they don’t realize it.

It’s pretty easy to make assertions about different parts of a website - it’s 
not rocket science. If you want to talk about certificate issuance that’s 
broken, look at how Let’s Encrypt has issued more than 14,000 DV certs to 
domains with “PayPal” in them. What’s weird is that the same people who think 
CAs are doing things wrong and EV certs are bad are the same people who say 
it’s not Let’s Encrypt’s responsibility to fight phishing. Pot, kettle, black. 

And Google doesn’t agree with Google - "keep URLs simple and only show the 
domain name for safety, but allow me to introduce you to AMP”…  where URLs and 
brand identity goes to die. A big company is only as good as the few people who 
work on a given project. I have seen zero data from Mozilla (or Google) when it 
comes to website identity and their decision to remove it from the UI. I can 
dig out many instances of where browser 

Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-01 Thread Paul Walsh via dev-security-policy
On Saturday, September 21, 2019 at 6:19:29 PM UTC-7, Ryan Sleevi wrote:

> On Sat, Sep 21, 2019 at 7:52 PM Kirk Hall via dev-security-policy <
> dev-security-policy@lists.mozilla.org 
> > wrote:
> 
>> To remedy this, Entrust Datacard surveyed all of its TLS/SSL web server
>> certificate customers over three days (19-21 September 2019) concerning
>> website identity in browsers, browser UIs in general, and EV browser UIs in
>> particular.  We have received 504 responses from customers to date, and
>> more responses are still coming in. Respondent company size ranged all the
>> way from 1-99 employees to over 20,000 employees.

[snip]

> 3) Are the numbers Entrust DataCard provided in
> https://cabforum.org/wp-content/uploads/23.-Update-on-London-Protocol.pdf 
> 
> still accurate? That is, do EV certificates account for only 0.48% of the
> certificate population?
> 
> If those numbers are correct, this seems like it's a survey that represents
> a small fraction of Entrust DataCard's customers (unless Entrust DataCard
> only a few thousand customers), which represents a small fraction of
> connections in Mozilla Firefox (approximately 0.3% over a 2 month period),
> regarding certificates that account for only 0.48% of the certificate
> population.
> 
> Is that the correct perspective?

[PW] The following response is to address the questions/comments regarding 
dataset type and size. 

Sean Ellis [1] was the head of marketing at LogMeIn and Uproar from launch to 
IPO. He was the first marketer at Dropbox, Lookout and Xobni, and he coined the 
term "growth hacker" in 2010. So, he knows a thing or two when it comes to 
product/market fit research - including the type of questions to ask, and the 
size of the dataset required to derive a good understanding of the responses.

According to Ellis, the goal for a customer survey is to get feedback from 
people who had recently experienced "real usage" of the product. The key 
question in the survey for these people according to Ellis, is:

"How would you feel if you could no longer rely on MetaCert's green shield?"

a) Very disappointed
b) Somewhat disappointed
c) Not disappointed
d) N/A I no longer use the product

According to Ellis, to get an indication of product/market fit, you'll want to 
know the percentage of people who would be "very disappointed" if they could no 
longer use your product. In his experience, it becomes possible to sustainably 
grow a product when it reaches around 40% of users who try it that would be 
"very disappointed" if they could no longer use it.

For this percentage to be meaningful, you need to have a fairly large sample 
size. In Ellis' experience, a minimum of 30 responses is needed before the 
survey becomes directionally useful. At 100+ responses he is much more 
confident in the results. 

Based on Ellis' observations, it would appear that Entrust DataCard's dataset 
is big enough. 

I'm not debating the merits of the research, as I have my own research to prove 
that browser-based visual indicators for website identity do protect 
end-users - but only when designed properly. 

[1] 
https://blog.growthhackers.com/using-product-market-fit-to-drive-sustainable-growth-58e9124ee8db
 


Regards,
Paul


