> On Oct 9, 2019, at 3:23 PM, Ryan Sleevi <r...@sleevi.com> wrote:
> 
> 
> 
> On Wed, Oct 9, 2019 at 6:06 PM Paul Walsh via dev-security-policy 
> <dev-security-policy@lists.mozilla.org 
> <mailto:dev-security-policy@lists.mozilla.org>> wrote:
> I believe an alternative icon to the encryption lock would make a massive 
> difference to combating the security threats that involve dangerous links and 
> websites. I provided data to back up my beliefs. 
> 
> Here's peer-reviewed data, in top-tier venues, that shows the beliefs are 
> unfounded:
> https://ai.google/research/pubs/pub48199
> https://ai.google/research/pubs/pub45366

I don’t disagree with this research. But it’s the wrong research, Ryan, asking 
the wrong questions. You haven’t explained why any of my research draws the 
wrong conclusions, but I’ll explain why Google’s is fundamentally wrong. 

Perhaps you can do the same for me when you get time. 

We can all agree that almost no user knows the difference between a site with a 
DV cert and a site with an EV cert. I personally came to that conclusion years 
ago. I wanted data, so I asked more than 3,000 people. Almost everyone assumed 
the padlock represents identity/safety. 

I can cite research on search annotation through a browser add-on (2007). It 
was formally endorsed by the W3C Semantic Web Education and Outreach Program as 
one of the most compelling implementations of the Semantic Web. It’s out of 
date and doesn’t answer the right questions, but here it is: 
https://www.w3.org/2001/sw/sweo/public/UseCases/Segala/
> 
> Do you have any peer-reviewed data to support your beliefs? It seemed like 
> the only data shared was from vendors marketing solutions in this space, 
> although perhaps it was overlooked.

[PW] Perhaps you did overlook it - hard to say, as you didn’t reply to the 
thread that contained the data. 

The research to which you refer is from a vendor’s marketing solution. Google 
is the vendor and Chrome is the marketing solution. This is no different to 
MetaCert asking 85k power users. 

We had absolutely no reason to lie to ourselves or to skew opinions for this 
conversation. 

We sell security services while verifying domains for free. We needed to do the 
research to find out if we had a solution to a problem. In theory, we are 
putting CAs out of business. And if all browsers implemented better UI for 
website identity, it would put our flagship solution out of business. If I 
convinced you that you are wrong and I’m right, I’d have more to lose than I 
would to gain. Right now I’m putting industry and people’s safety ahead of my 
shareholders. I’m completely impartial to browser vendor vs CA debates on all 
fronts.

>  
> The reverse-proxy phishing technique bypasses Google’s own Safe Browsing API 
> inside their own WebView while their own users sign into Google pages while 
> using Google’s Authenticator for 2FA. So their answer? In June 2019 they 
> banned users from signing into Google’s pages while using mobile apps with a 
> WebView. This tells you what you need to know about the Safe Browsing API - 
> finally I have the evidence to prove that it’s an “ok” solution at best. Most 
> security companies still think it’s great - because they’re not in possession 
> of all the facts. 
> 
> While I suspect I'll regret replying to this message, since so much of it is 
> off-topic for this discussion Forum, I do want to point out the attribution 
> error being made with correlation versus causation. You're making a specific 
> conclusion about why WebView-based sign-ins were banned, without any 
> supporting data, along with factually-suspect statements that are unsupported.

[PW] Why haven’t you provided any insight to suggest why I’m wrong, instead of 
asserting that I haven’t provided evidence to back up my assertions? 

But because you asked so nicely:

The following was published by Jonathan Skelker, Product Manager for Account 
Security at Google, in April 2019:

“[snip]… one form of phishing, known as “man in the middle 
<https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/>” 
(MITM), is hard to detect when an embedded browser framework (e.g., Chromium 
Embedded Framework <https://bitbucket.org/chromiumembedded/cef> - CEF) or 
another automation platform is being used for authentication. MITM intercepts 
the communications between a user and Google in real-time to gather the user’s 
credentials (including the second factor in some cases) and sign in. Because we 
can’t differentiate between a legitimate sign in and a MITM attack on these 
platforms, we will be blocking sign-ins from embedded browser frameworks 
starting in June. This is similar to the restriction on webview 
<https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html>
 sign-ins announced in April 2016.”

https://security.googleblog.com/2019/04/better-protection-against-man-in-middle.html

If Google is unable to detect these attacks inside apps that use the Chromium 
Embedded Framework, I must conclude that it is also unable to detect the same 
attacks inside the desktop and mobile versions of Chrome. I don’t think any 
company can - I’m not pointing the finger at Google - I’m pointing it at the 
problem, and at the global lack of solutions to it. 
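To make concrete why this class of attack is so hard to detect, here is a 
minimal sketch of the real-time relay it describes. Everything here is 
hypothetical and invented for illustration - no real service, credential, or 
API is involved. The point it demonstrates: because the proxy forwards the 
one-time code before it expires, the legitimate site sees a perfectly valid 
login, and the attacker still ends up holding the session.

```python
# Hypothetical simulation of a real-time reverse-proxy ("man in the middle")
# phishing relay, in the style of tools like Evilginx. All names and values
# are invented for illustration.

def real_site_login(username, password, otp):
    """Stand-in for the legitimate site: returns a session token on success."""
    if (username, password, otp) == ("victim", "hunter2", "123456"):
        return "session-token-abc"
    return None

class PhishingProxy:
    """Sits between the victim and the real site, relaying everything the
    victim types - including the second factor - in real time."""

    def __init__(self):
        self.captured_session = None

    def handle_login(self, username, password, otp):
        # Forward the credentials to the real site exactly as typed.
        # From the site's perspective, this is an ordinary, valid login.
        token = real_site_login(username, password, otp)
        if token is not None:
            self.captured_session = token  # attacker keeps the session
        return token  # victim sees a successful sign-in

proxy = PhishingProxy()
proxy.handle_login("victim", "hunter2", "123456")
print(proxy.captured_session)  # the attacker now holds a valid session
```

Note that 2FA is consumed, not bypassed: the one-time code is spent on the 
attacker's behalf, which is exactly why the server-side view of the traffic 
looks legitimate.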

Separately, I wrote about the potential threats of WebView / frameworks that 
display web content inside apps in April 2015, so I’m very much aware of 
everything you say: 
https://developer.metacert.com/blog/how-webview-has-weakened-the-tcb-of-the-web-infrastructure/
In fact, it took people like me to bring these problems to the attention of 
their makers. While I didn’t turn this into a paper, it was referenced by 
senior security researchers at some big firms. Perhaps I should PDF it and you 
can call it a peer-reviewed paper :)

While there are many amazingly strange weaknesses in Google’s framework that 
make it less safe than a browser, that doesn’t mean my conclusion is wrong. If 
Google (or any other company) were able to detect the attack OR the domain, it 
wouldn’t need to ban users from signing into Google websites. What other 
reason can there be?

You are taking everything personally. What hidden intents could I possibly 
have? I have absolutely nothing to gain by encouraging browser vendors to 
implement better website identity solutions. I have nothing to gain by telling 
the world that even my security products are unable to detect every new 
phishing URI or website. What ulterior motives could I have? I’m not 
questioning your motives or Google’s. I believe everyone at Google and Mozilla 
is simply wrong about website identity, because they’re using their personal 
opinions or they’re asking the wrong questions. 

Is my conclusion wrong?

> 
> For example, this unsupported speculation conveniently ignores that, until 
> Android 8.0 (Oreo), WebView did not participate in Safe Browsing checks 
> (https://developer.android.com/guide/webapps/managing-webview#safe-browsing). 
> It also seems, mysteriously, to ignore that WebView-based sign-ins give the 
> hosting application full access to the user's credentials and cookies 
> (https://developer.android.com/reference/android/webkit/CookieManager). 
> It's an interesting theory that the reason for forbidding is phishing, but it 
> conveniently ignores many facts in order to try and support it.
> 
> I suspect this is all the result of ignoring the stated reasons, such as 
> https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html
> , and instead speculating about hidden intents. I'm hoping you could provide 
> data to support the claim being made?
> 
> WebAuthn-based security is something we can agree on being brilliant. 
> Finally. However, we will not see mainstream adoption for a very long time, 
> if ever. So, we need to do something to protect people for the next 5 years.
> 
> It would be remiss not to highlight how effortlessly incongruous this is with 
> the previous arguments. The discussion of phishing has, to date, been focused 
> on "large" targets - for example, Google, Microsoft Office 365, and PayPal. 
> This is perhaps unsurprising, as they're popular phishing targets. The 
> adoption of WebAuthN by those three sites would, thus, correspondingly have a 
> marked decrease in phishing.

Your last comment isn’t correct, so I’ll fix it: "The adoption of WebAuthN by 
those three sites *and all of their end-users/customers* would, thus, 
correspondingly have a marked decrease in phishing."

I’d love nothing more than for every website to support WebAuthN and for every 
consumer to adopt it. I have absolutely nothing negative to say about it. I’m 
commenting on my personal opinion on how likely it is to see mass adoption in 
the very near future. 
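For what it’s worth, the reason WebAuthn defeats the relay attack discussed 
earlier in this thread is origin binding: the authenticator signs over the 
origin the browser actually saw, and the relying party rejects an assertion 
minted for any other origin. Here is a hypothetical sketch of that check - it 
uses an HMAC as a stand-in for the device’s public-key signature, and every 
name and origin in it is invented:

```python
# Hypothetical sketch of WebAuthn-style origin binding. An HMAC stands in for
# the authenticator's public-key signature; names/origins are invented.
import hashlib
import hmac
import json

DEVICE_KEY = b"device-private-key"  # in reality, never leaves the authenticator

def authenticator_sign(challenge, origin):
    """The authenticator signs the challenge AND the origin it actually saw."""
    client_data = json.dumps({"challenge": challenge, "origin": origin})
    sig = hmac.new(DEVICE_KEY, client_data.encode(), hashlib.sha256).hexdigest()
    return client_data, sig

def relying_party_verify(client_data, sig, challenge, expected_origin):
    """The site checks the signature, the challenge, and the origin."""
    expected_sig = hmac.new(
        DEVICE_KEY, client_data.encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(sig, expected_sig):
        return False
    data = json.loads(client_data)
    # Origin binding: an assertion produced on a phishing page names the
    # phishing origin, so relaying it to the real site fails this check.
    return data["challenge"] == challenge and data["origin"] == expected_origin
```

A reverse proxy can relay a password and an OTP, because those are just 
strings; it cannot relay this assertion, because the wrong origin is baked 
into the signed data. That’s the property the relay attack cannot work around.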

> 
> Amazingly, there's public data to support this hypothesis, 
> https://krebsonsecurity.com/2018/07/google-security-keys-neutralized-employee-phishing/

We will absolutely see a decrease in phishing, *when* we see user adoption. I’m 
not debating that. 

- Paul

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy