On Thu, Jan 16, 2014 at 18:28 (UTC), Claudio Moretti <[email protected]> wrote:
> On Tue, Jan 14, 2014 at 03:10 (UTC), Drake, Brian <[email protected]> wrote:
>
>> [snip]
>>
>> I guess those are valid points. To address the issue of reading the e-mails, I will quote from Yan [1]:
>>
>>> You'd have to make it very very clear to the user that they are disclosing their IP address and what website they were visiting by sending a rule to the server.
>>
>> To address both reading and writing, I will quote from “How to Deploy HTTPS Correctly” [2]:
>>
>>> HTTPS provides the baseline of safety for web application users, and there is no performance- or cost-based reason to stick with HTTP. Web application providers undermine their business models when, by continuing to use HTTP, they enable a wide range of attackers anywhere on the internet to compromise users' information.
>>
>> If HTTP is not acceptable in web applications, regardless of what information is being transferred through them, then shouldn’t non-secure e-mail (which in practice seems to be virtually all e-mail) be equally unacceptable?
>
> My point is that we're talking about two completely different technologies:
>
> HTTP websites are insecure by nature, but you _visit_ them, so you may be unaware that they are recording your visit (HTTPS can't deal with this, of course) or that there's a third party that sniffs your traffic (and here HTTPS comes in handy).
>
> When you send an email, you are _aware_ that you are disclosing personal information, at least your email address. If you have something to hide, you can _decide_ not to send an email, but you can't decide what a webserver is going to store or if somebody is looking at your traffic.
>
> So you can't relate emails and HTTP in terms of privacy, because they have two completely different scopes.

Are you aware that you’re disclosing your e-mail address? Probably, though not necessarily. What about other information? [1]

Perhaps I’m being unreasonably picky here. Relating e-mails and HTTP in terms of third parties sniffing the traffic still seems valid.

> I believe there is, it should be SpamAssassin (just guessing), and also remember that you can only send an email to the list if you are subscribed to it. If you're not subscribed, your message doesn't go through unless approved.

I was not aware that the sender needs to be a subscriber or the message needs to be approved. It makes a lot more sense now!

>>>> To try to address the fingerprinting concerns, we could try to check the rulesets against what others see. We could do, for rulesets, what people already do for certificates:
>>>> - centralised approach: SSL Observatory
>>>
>>> Cool, but needs maintenance and, as Yan said, the EFF tech staff is already overloaded
>>
>> I’m not expecting the EFF to implement this (at least for now), but hopefully someone else out there has the time and the knowledge to do it.
>
> The problem here is privacy concerns: how could we guarantee that the third party respects EFF rules regarding privacy matters? It's a really complicated topic...

Luckily, we’re open about everything and allow users to change the rules themselves, so this probably shouldn’t be a priority for us.

>>>> - distributed approach: Perspectives [1] – this should have a custom notary list, of course
>>>
>>> Also really cool, but rules change faster than SSL certificates, so I'm not really sure how this approach could be implemented effectively.
>>> Remember that we are also focusing on speed, so the overhead caused by checking rules against notaries every time you open a website might be a little too much, and it may prove to be far less useful than checking SSL certificates.
>>
>> I agree, checking every time you access a website makes sense for certificates but not for rulesets. I was thinking of doing it every time the rulesets are updated, which I’m guessing would be once a day, but I’m not familiar with update mechanisms like this.
>
> For 10000+ rules it might be difficult. Even if you enable it only against newly added rules, first-time HTTPS-E users would have to wait a really long time to get all the rulesets checked the first time, and they might be discouraged from using the extension.
>
> Maybe trying to find a way to check hashes? Still complicated and very expensive in terms of computational power, but still a little better. Benchmarks could be really useful, or we might leave it disabled by default and add a warning that says "if you enable this, it's probably going to kill your internet speed until it's finished, which depending on your ISP might take anything between 1 hour and 1 decade" :P

It seems very unlikely that users would choose 10000+ ruleset sources, though. I imagined a much smaller number of ruleset sources (each of which provides a single piece of data containing many rulesets), each of which was verified using a hash. If an update to a ruleset source is so recent that it’s too hard to verify that hash against others’ observations, then keep using an old version until more observations become available. (A rough sketch of what I have in mind is below, after my signature.) Anyway, like I said above, I can see why this shouldn’t be a priority for us.

> Cheers,
>
> Claudio

[1] https://support.google.com/mail/answer/26903

(note: I removed John from the CC list because this doesn’t seem to relate to the topic of John’s message. I don’t know what the accepted behaviour is in this regard.)

--
Brian Drake

All content created by me: Copyright <http://www.wipo.int/treaties/en/ip/berne/trtdocs_wo001.html> © 2014 Brian Drake. All rights reserved.
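P.S. Here is a very rough sketch of the hash check I have in mind, in Python just to illustrate the idea. This is not actual HTTPS Everywhere code, and the notary observations, quorum size, and names are all made up; the point is only that we compare one hash per ruleset source per update against what others have seen, and keep the old bundle when not enough observations agree yet.

import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash of a downloaded ruleset bundle, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def choose_bundle(new_bundle: bytes, old_bundle: bytes,
                  notary_hashes: list, quorum: int = 2) -> bytes:
    """Keep the old bundle unless at least `quorum` notaries report the
    same hash that we computed for the new bundle ourselves."""
    local_hash = sha256_hex(new_bundle)
    agreeing = sum(1 for h in notary_hashes if h == local_hash)
    return new_bundle if agreeing >= quorum else old_bundle

# Toy example: two of three (made-up) notary observations match our local
# hash, so the new bundle is accepted; with fewer matches we would keep
# using the old one until more observations become available.
if __name__ == "__main__":
    old = b"<previously accepted ruleset bundle>"
    new = b"<freshly downloaded ruleset bundle>"
    observed = [sha256_hex(new), sha256_hex(new), "some-unrelated-hash"]
    chosen = choose_bundle(new, old, observed)
    print("accepted new bundle" if chosen is new else "kept old bundle")

Checking a single hash per ruleset source per update like this should be cheap, unlike checking each of the 10000+ rules individually; how the notary observations would actually be gathered is the part I haven't thought through.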
_______________________________________________
HTTPS-Everywhere mailing list
[email protected]
https://lists.eff.org/mailman/listinfo/https-everywhere
