On Fri, Aug 14, 2020 at 1:53 AM Ronald Crane via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
>
> On 8/13/2020 3:18 PM, Tobias S. Josefowitz via dev-security-policy wrote:
> > So then, assuming we don't know, I don't think it would be appropriate
> > to just wish for the best, task the CAs to do it anyway, with the
> > option of threatening them with distrust later on if they are just!
> > not! good! enough! at it for some reason.
>
> Given the origin of this thread (report of CA issuing cert for obvious
> phishing domain that could be used to cause extensive damage to many
> people), this is rather facile. You seem to be arguing that because some
> edge cases will arise that will cause CAs (and domain registrars) some
> heartburn, we should not require CAs (and domain registrars) to avoid
> issuing certs (and domains) that are obviously useful for phishing.
> Clearly this phisher thought that it was useful to register a "phishy"
> domain rather than a non-"phishy" domain. This is some evidence that
> "phishy" domains are bad. Should CAs and registrars filter them?
> Possibly. I see no reason to discard this idea out of hand.

The use of "phishy" domains in itself really does not prove anything,
as long as there are plausible explanations for why they might be used
that preclude deeper conclusions about them; unless those explanations
can be shown to be false, or at least unlikely to be true.

In addition, I would contest that the question of phishy vs.
non-phishy is at all easily decidable. If I were to claim that
https://multiwebportal.hsbc.de/MULTIVERSA-IFP/faces/login/login.jsf
was actually a phishing site, what, other than perhaps common sense,
tells you it is not? Does that make it an edge case? How do you get
common sense into algorithms? After browsers made it
next-to-impossible to operate a web site without a trusted
certificate, it would change the web *forever*, for the worse, if the
cost of certificates were driven up in any significant way.

>
> > Even if examining domains as
> > strings usefully *should* impede phishing, that still leaves the
> > questions of why browsers would have the CAs do that for them as
> > opposed to running the phish-decider themselves.
>
> Maybe because more than one layer of protection is usually better than
> only one? Maybe because registrars and CAs profit from the internet, and
> so they should also help proactively to improve its safety, rather than
> doing only the bare minimum that the BRs can be read to require?

Honestly, I would be delighted if CAs collectively were anywhere near
doing the bare minimum that the BRs require. I do not see good reasons
for making CAs the content police, but I see many reasons against it.

> It would be wonderful to have a single sovereign remedy for all the
> internet's problems. We haven't so far, and I doubt very much that we
> ever will (but please write an RFC if you think you do). The physical
> world is awash in whack-a-mole problems, and the internet, to all
> appearances, is the same.

I agree it would be wonderful, but I never suggested there could or
would be one. I am just not looking to waste my time, or anyone's, on
efforts that provide no mid- or long-term benefit.

Tobi
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy