On Sun, Apr 09, 2017 at 07:51:09 -0400, Robert J. Hansen wrote:
> In the real world, threat models are much simpler. Basically, you're
> either dealing with Mossad or not-Mossad. If your adversary is
> not-Mossad, then you’ll probably be fine if you pick a good password
> and don’t respond to emails from
> [email protected]. If your adversary is the
> Mossad, YOU'RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT
> IT. The Mossad is not intimidated by the fact that you employ
> https://.

(Don't get me wrong---I like James Mickens; I watched an MIT course he
partially taught, and I was rather fond of him. But this is a dangerous
article: it's hard to distinguish the satire from the actual security
advice, and there's both.)

This type of defeatism is just as absurd as putting your faith in snake
oil, or failing to even contemplate a threat model before blindly
following others' advice. In fact, the latter is precisely what this
is---not from the author's standpoint, but from the reader's.

Security is not binary (or ternary, in that article). You're not just
dealing with "Mossad or not-Mossad". You're dealing with a wide range
of adversaries, from your grandmother who gets on your computer while
you're still logged into your dating website, to script kiddies who
discovered intro-to-Metasploit articles, to script kiddies at the CIA
and NSA, to actual targeted attacks/surveillance by a State, to the guy
who's going to break and then re-break your kneecaps until you give him
what he wants.

If I know a threat exists, I'm going to evaluate my threat model and
decide whether or not it is worth my time to mitigate it; whether I can
hope to mitigate it; and whether attempting to do so is going to put me
at even more risk from some other threat.

I just gave a talk at LP2017 about "The Surreptitious Assault On
Privacy, Security, and Freedom". The talk focused on threats that might
actually be applicable to the audience. There were no discussions about
drone targeting or kneecap breaking or NSA interception of packages.
There was no discussion about tapping undersea cables. And yet, the
sophistication of the threats in the presentation was such that I
didn't get to a fraction of what I wanted to discuss.

Most people aren't going to have to worry about the CIA taking control
of their stupid 4G-enabled, always-connected vehicle to assassinate
them or abduct their children. But the attacks and surveillance methods
the CIA and NSA use on these types of things---as revealed by Vault 7,
Shadow Brokers, Snowden, Klein, and others---can be discovered or
performed by other bad actors. And they are. So defeatist attitudes
toward State actors make you immediately vulnerable to less skilled,
less resourceful attackers.

Using HTTPS doesn't protect me against a lot of things. But it does
protect me from many things.

> Once you assume that your opponent is specifically targeting you with
> malware capable of sophisticated memory forensics, you're screwed.

Again, defeatist. For your average user, yeah, they're screwed just by
using technology in the first place---if not by crackers, then by
adversaries like the companies they're feeding data to. But _I_ could
target someone with memory forensics "malware", and I'm not a cracker!
If not through an exploit for the slew of vulnerable systems users use,
then through physical compromise of their computer. Maybe pay off an
evil maid. I've never tried a cold boot attack, but maybe I'd have some
luck with that.

We're not talking about State-level knowledge here---we're talking
about using existing tools; we're talking about a privilege escalation
vulnerability; we're talking about data swapping to disk; we're talking
about Heartbleed, and Cloudbleed, and many other such bugs; ...and so
on!

> Pinning your hopes on a smartcard is the worst kind of crypto-fetishism.
> You can't proudly hold it up and say "ah ha, but *now* I am safe from
> Tier-1 actors!" It doesn't work that way.
>
> Smartcards are a great technology for a certain part of the problem
> domain, but they aren't magical crypto fairy dust.

Nor should anyone think they are. But it's sure as hell a smaller
attack surface than the, uh, near-unlimited attack surface of an
Internet-connected computer (or mobile device!) that most people store
their private keys on.

I use a Smartcard because the attack surface is otherwise enormous---I
cannot audit whether my key has been compromised; I don't have the time
or resources. I like to believe my key was reasonably secure. But I
generated a new one about a year ago, got me a Smartcard, stored the
master key offline, and access it using an airgapped computer. Does
that prevent me from being pwned by a committed adversary? No, not even
close. But I can enumerate many such attacks against my current setup,
and they're far fewer than the nearly innumerable attacks against my
previous situation.
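
In case it's useful to anyone setting up something similar, here's
roughly what that workflow looks like with a modern GnuPG (2.1 or
later). This is a sketch, not a recipe: the key id and filenames below
are placeholders, and the details---key sizes, expiry, how you back up
the master key---depend on your card and your own threat model.

  ## On the airgapped machine ##
  gpg --full-generate-key                  # create the master (certify) key
  gpg --list-secret-keys --keyid-format long
  gpg --edit-key 0xDEADBEEF                # placeholder id; `addkey' to add
                                           # sign/encrypt/auth subkeys

  # Back up the secret master key to offline media only, and export the
  # public key for use elsewhere.
  gpg --armor --export-secret-keys 0xDEADBEEF > master-secret.asc
  gpg --armor --export 0xDEADBEEF > pubkey.asc

  # Move each subkey onto the card: `key N' to select it, then
  # `keytocard', then `save'.
  gpg --edit-key 0xDEADBEEF

  ## On the everyday machine ##
  gpg --import pubkey.asc                  # public key only; secrets stay
                                           # offline or on the card
  gpg --card-status                        # creates stubs pointing at the card

The exact commands aren't the point; the point is that the master key
never touches a networked machine, and day-to-day signing and
decryption can only happen with the card present (and its PIN).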

If someone's setting up a GPG key, am I going to suggest to them that
they use a Smartcard? _Of course_ I am! I'd rather do that than spend
the next few months educating them on portions of a relevant threat
model and do-this but no don't-do-that but oh that means you can't use
the Internet at all, sorry! And by the time I'm done explaining that,
there'll be another catchily-named vulnerability out there peeking out
from the stockpile of CVEs that have made their way into pentesting
frameworks with a click-to-pwn usability level.

Do I think Mickens is going to stand there and tell Karen Sandler that
she shouldn't give a care about the security of her pacemaker because
someone can season her cup of noodles with uranium? No, I don't.

--
Mike Gerwitz
Free Software Hacker+Activist | GNU Maintainer & Volunteer
GPG: D6E9 B930 028A 6C38 F43B 2388 FEF6 3574 5E6F 6D05
https://mikegerwitz.com
