> On Oct 9, 2019, at 10:42 AM, Ronald Crane via dev-security-policy <email@example.com> wrote:
>
> On 10/2/2019 3:50 PM, Paul Walsh via dev-security-policy wrote:
>
> [snip]
>>>> sɑlesforce[.com] is available for purchase right now.
>>> I was going to suggest banning non-Latin-glyph domains, since they are yet
>>> another useful phishing weapon. FF converts all such domains into Punycode
>>> when typed or pasted into the address bar, though the conversion is
>>> displayed below the address bar, not in it. So your example becomes
>>> "http://xn--slesforce-51d.com/".
>> Just providing an example of a URL that uses .com. I can provide more
>> without using special characters to demonstrate the same point.
>
> Well, I'm sure that many domains containing "salesforce" presently are
> unregistered, e.g., "salesforcecorp.com". This fact supports the idea that
> internet entities should make a concerted effort to clean up their namespaces
> as I noted previously. Of course, that should be one among many other
> approaches to reducing phishing….
[PW] I agree.

> Elsewhere in this thread I proposed a foundation-run *whitelist* of authentic
> domains that browsers could use to warn users about potential phishing sites
> (e.g., "paypal.com" is in the whitelist, but the ~20,000 other nonauthentic
> domains containing "paypal" are not). This approach would reduce the need for
> users to examine domains to determine authenticity. What's your view on it?

[PW] I agree, and such lists already exist. At MetaCert we aggregate all the open-source lists that are available. We also have our own community of a few thousand members who report and validate suspicious links every day, while also submitting and validating links that should be verified as safe. Everything goes into one database that is served through an API covering 3,500+ shortening services, so you can call the API and get a response in about 270 ms. But even this is not good enough.

We eradicated phishing for the crypto world on Slack with a security integration in Q4 2017, when it was rampant beyond belief. As soon as a phishing attack was discovered, reviewed, validated, and classified, messages containing those links were auto-deleted from other Slack workspaces. There were times when we classified scams in less than two minutes. We even have software with machine-learning capabilities listening to the Twitter firehose: it detects signals attributed to scams, follows the thread, finds the URL or digital-wallet address, and classifies it.

But we came to learn that no matter how fast we get, and no matter how much technology and how many people we throw at the problem, there will always be victims. It is technically impossible to detect every new dangerous URL or website. My team and I have even written a white paper, a technical paper, and the mathematical equations for a crypto token to incentivize decentralizing the decision-making process. That took about 18 months.
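For readers unfamiliar with how such a lookup service hangs together, here is a minimal sketch of the flow described above: aggregated lists and community reports feed one database, shortened links are expanded first, and a single lookup classifies the URL. Every name, table, and label below is a hypothetical illustration, not MetaCert's actual API.

```python
# Hypothetical sketch of a URL-classification lookup. The database
# contents, shortener table, function name, and labels are all
# illustrative stand-ins, not a real service.
from urllib.parse import urlparse

# Stand-in for the aggregated classification database
# (open-source lists + community reports merged into one store).
CLASSIFICATIONS = {
    "paypal.com": "verified",
    "xn--slesforce-51d.com": "phishing",  # punycode homograph example
}

# Stand-in for shortener expansion (a real system would resolve
# links from 3,500+ shortening services before the lookup).
SHORTENER_MAP = {
    "sho.rt/abc": "http://xn--slesforce-51d.com/login",
}

def classify_url(url: str) -> str:
    """Return 'verified', 'phishing', or 'unknown' for a URL."""
    # Expand known shortened links first, so the lookup always
    # runs against the final destination host.
    key = url.split("://", 1)[-1].rstrip("/")
    if key in SHORTENER_MAP:
        url = SHORTENER_MAP[key]
    host = urlparse(url).hostname or ""
    return CLASSIFICATIONS.get(host, "unknown")
```

The point of the "unknown" bucket is the argument above: no matter how fast the database is updated, some URLs will always fall through, which is why an allowlist of verified identities is easier to reason about than a blocklist of threats.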
All of this, along with our R&D into visual indicators and URL classification dating back to 2004, led us to conclude that chasing threats just isn’t effective enough. I would argue that we have built the most advanced URL-based threat-intelligence system by an order of magnitude, since it can also classify folders on sites like GitHub in a way others can't. But I’m losing faith and conviction in the entire threat model. It’s so much easier to tell someone what’s safe than it is to detect what’s dangerous.

So I agree with you, Ronald: your suggestion is a great one. But I’m afraid it doesn’t solve the problem in the way that website identity does, as I described previously. This is not a popular belief; I never seem to pick things that are easy.

- Paul

> -R
>
> _______________________________________________
> dev-security-policy mailing list
> firstname.lastname@example.org
> https://lists.mozilla.org/listinfo/dev-security-policy