October 15, 2016

         by Bruce Schneier
   CTO, Resilient, an IBM Company

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:
     Security Economics of the Internet of Things
     Cybersecurity Issues for the Next Administration
     Security Design: Stop Trying to Fix the User
     Schneier News
     Recovering an iPhone 5c Passcode
     The Hacking of Yahoo

** *** ***** ******* *********** *************

     Security Economics of the Internet of Things

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the Internet -- sometimes in the millions -- are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it's a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender's capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.
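The size-vs-size arithmetic can be sketched in a few lines of Python. All of the numbers here (device count, per-device uplink, mitigation capacity) are hypothetical illustrations, not figures from the Krebs attack:

```python
# A toy model of the "size vs. size" dynamic described above.
# Device uplink rates and scrubbing capacity vary widely in practice;
# these figures are assumptions chosen only to show the arithmetic.

def botnet_gbps(devices: int, uplink_mbps: float) -> float:
    """Aggregate attack traffic, in Gbps, from `devices` bots."""
    return devices * uplink_mbps / 1000.0

def attack_succeeds(attack_gbps: float, mitigation_gbps: float) -> bool:
    """The attacker wins if the fire hose exceeds the defender's capacity."""
    return attack_gbps > mitigation_gbps

# 150,000 compromised cameras and DVRs, each pushing ~5 Mbps upstream
attack = botnet_gbps(150_000, 5.0)     # 750 Gbps in aggregate
print(attack_succeeds(attack, 500.0))  # a 500 Gbps mitigation service is overwhelmed
```

The point of the sketch is how cheaply the attacker's side scales: adding bots is nearly free, while the defender must provision capacity in advance.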

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can't get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

Even worse, most of these devices don't have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can't update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that's a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

My previous essay on IoT Security:

Source code for attack made public:

Here are some of the things that are vulnerable.

Slashdot thread.

** *** ***** ******* *********** *************

     Cybersecurity Issues for the Next Administration

On today's Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.

First, less surveillance. Surveillance has become the business model of the Internet, and an aspect that is appealing to governments worldwide. While computers make it easier to collect data, and networks to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated and targeted. It's a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.

Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.

Third, less propaganda. One of the side-effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all forms, the better off we all are.

And fourth, less use control. Governments need to ensure that our Internet systems are open and not closed, that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on what apps you can run on your smartphone, or what you can do with the digital files you purchase or are collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.

Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they'll result in more power and control to the individual and less to powerful institutions. That's how we built an Internet that enshrined the best of our societies, and that's how we'll keep it that way for future generations.

This essay previously appeared in a section about issues for the next president. It was supposed to appear in the print magazine, but was preempted by Donald Trump coverage.

** *** ***** ******* *********** *************

     News
Research paper: "Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study."

Hacking bridge-hand generation software:

"Periscope skimmers" are the most sophisticated kind of ATM skimmers. They are entirely inside the ATM, meaning they're nearly impossible for users to notice. They've been found in the US.

This is an interesting back-and-forth on the equities debate: initial post by Dave Aitel and Matt Tait, a reply by Mailyn Fidler, a short reply by Aitel, and a reply to the reply by Fidler.

Two good essays on the NSA's "Upstream" data collection under Section 702:

Impressive remote hack of the Tesla Model S.
The vulnerability has been fixed.
Remember, a modern car isn't an automobile with a computer in it. It's a computer with four wheels and an engine. Actually, it's a distributed system of 20 to 400 computers with four wheels and an engine.

I like this Amtrak security awareness campaign. Especially the use of my term "security theater."

Jailbreaking the iPhone 7 took 24 hours.

Brian Krebs writes about the massive DDoS attack against his site.

Neural networks are good at identifying faces, even if they're blurry:

A new malware tries to detect if it's running in a virtual machine or sandboxed test environment by looking for signs of normal use and not executing if they're not there.

Interesting research from Sasha Romanosky at RAND on the cost of cyberattacks. Short answer: it's less than you probably think.

Interesting survey of the cybersecurity culture in Norway.

This article on US/China cooperation and competition in cyberspace is an interesting lens through which to examine security policy.

Forbes is reporting that the Israeli cyberweapons arms manufacturer Wintego has a man-in-the-middle exploit against WhatsApp. It's a weird story. I'm not sure how they do it, but something doesn't sound right.

There's a new French credit card where the CVV code changes every hour.

This paper wins "best abstract" award: "Quantum Tokens for Digital Signatures," by Shalev Ben-David and Or Sattath:

Yahoo scanned everyone's e-mails for the NSA.
Other companies have been quick to deny that they did the same thing, but I generally don't believe those carefully worded statements about what they have and haven't done. We do know that the NSA uses bribery, coercion, threat, legal compulsion, and outright theft to get what they want. We just don't know which one they use in which case.

The NSA has another contractor who stole classified documents. It's a weird story: "But more than a month later, the authorities cannot say with certainty whether Mr. Martin leaked the information, passed them on to a third party or whether he simply downloaded them." So maybe a potential leaker. Or a spy. Or just a document collector.
My guess is that there are many leakers inside the US government, even more than what's on this list from last year.

Indiana's voter registration database is frighteningly insecure. You can edit anyone's information you want.

Richard Thieme gave a talk on the psychological impact of doing classified intelligence work.

TU Delft is running a free online class in cybersecurity economics.

Interesting data and analysis on the psychology of bad password habits.

Interesting research in Nature arguing that murder is a relatively poor evolutionary strategy.

** *** ***** ******* *********** *************

     Security Design: Stop Trying to Fix the User

Every few years, a researcher replicates a security study by littering USB sticks around an organization's grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as "teachable moments" for others. "If only everyone was more security aware and had more security training," they say, "the Internet would be a much safer place."

Enough of that. The problem isn't the users: it's that we've designed our computer systems' security so badly that we demand the user do all of these counterintuitive things. Why can't users choose easy-to-remember passwords? Why can't they click on links in emails with wild abandon? Why can't they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we've thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This "either/or" thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers' good intentions, these warnings just inure people to them. I've read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don't see "the certificate has expired; are you sure you want to go to this webpage?" They see, "I'm an annoying message preventing you from reading a webpage. Click here to get rid of me."

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the "I forgot my password" link as a way to bypass the system completely -- effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can't train users not to click on links when you've spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users' security goals without -- as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it -- "stress of mind, or knowledge of a long series of rules."

I've been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People -- and developers -- are finally starting to listen. Many security updates happen automatically so users don't have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user's system so they don't have to worry about embedded malware. And programs can run in sandboxes that don't compromise the entire computer. We've come a long way, but we have a lot further to go.

"Blame the victim" thinking is older than the Internet, of course. But that doesn't make it right. We owe it to our users to make the Information Age a safe place for everyone -- not just those with "security awareness."

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.


** *** ***** ******* *********** *************

     Schneier News

I'm speaking in Sydney on October 20 at the Australia Information Security Association Conference.

I'm also speaking at the University of Sydney Centre for International Security Studies on October 20, in the evening.

I'm speaking in Berlin on October 24 at the ISF 27th Annual World Congress.

** *** ***** ******* *********** *************

     Recovering an iPhone 5c Passcode

Remember the San Bernardino killer's iPhone, and how the FBI maintained that they couldn't get the encryption key without Apple providing them with a universal backdoor? Many of us computer-security experts said that they were wrong, and there were several possible techniques they could use. One of them was manually removing the flash chip from the phone, extracting the memory, and then running a brute-force attack without worrying about the phone deleting the key. The FBI said it was impossible. We all said they were wrong. Now, Sergei Skorobogatov at Cambridge University has proved them wrong.
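The arithmetic behind the flash-mirroring approach is simple: once the chip's contents can be cloned and restored, the retry limit is moot, and the attacker can walk the entire keyspace. The per-attempt timing below is a rough assumption in the neighborhood of Skorobogatov's reported results (covering the full four-digit space took him roughly 40 hours), not an exact measurement:

```python
# Back-of-the-envelope arithmetic for the NAND-mirroring attack described
# above. Restoring the flash image resets the passcode-retry counter, so
# every code can be tried. The per-attempt cost is an illustrative
# assumption that includes rewriting the flash and rebooting the phone.

SECONDS_PER_ATTEMPT = 14  # assumed average, including flash restore + reboot

def worst_case_hours(passcode_digits: int) -> float:
    """Hours to exhaust the full keyspace of an all-digit passcode."""
    keyspace = 10 ** passcode_digits
    return keyspace * SECONDS_PER_ATTEMPT / 3600

print(f"4-digit passcode: {worst_case_hours(4):.0f} hours")      # ~39 hours
print(f"6-digit passcode: {worst_case_hours(6) / 24:.0f} days")  # ~162 days
```

Under these assumptions a four-digit passcode falls in under two days of unattended work, which is why the technique mattered for the San Bernardino phone; a six-digit passcode pushes the worst case out to several months.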

Susan Landau explains why this is important:

    The moral of the story? It's not, as the FBI has been
    requesting, a bill to make it easier to access encrypted
    communications, as in the proposed revised Burr-Feinstein bill.
    Such "solutions" would make us less secure, not more so.
    Instead we need to increase law enforcement's capabilities to
    handle encrypted communications and devices. This will also
    take more funding as well as redirection of efforts. Increased
    security of our devices and simultaneous increased capabilities
    of law enforcement are the only sensible approach to a world
    where securing the bits, whether of health data, financial
    information, or private emails, has become of paramount
    importance.
Or: The FBI needs computer-security expertise, not backdoors.

Landau's commentary:

Patrick Ball writes about the dangers of backdoors:

** *** ***** ******* *********** *************

     The Hacking of Yahoo

Last month, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest publicly disclosed data breach to date.

Yahoo! claimed it was a government that did it:

    A recent investigation by Yahoo! Inc. has confirmed that a copy
    of certain user account information was stolen from the
    company's network in late 2014 by what it believes is a
    state-sponsored actor.

I did a bunch of press interviews after the hack, and repeatedly said that "state-sponsored actor" is often code for "please don't blame us for our shoddy security because it was a really sophisticated attacker and we can't be expected to defend ourselves against that."

Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.

    But when it came time to commit meaningful dollars to improve
    Yahoo's security infrastructure, Ms. Mayer repeatedly clashed
    with Mr. Stamos, according to the current and former employees.
    She denied Yahoo's security team financial resources and put
    off proactive security defenses, including intrusion-detection
    mechanisms for Yahoo's production systems.

The second story is from the Wall Street Journal:

    InfoArmor said the hackers, whom it calls "Group E," have sold
    the entire Yahoo database at least three times, including one
    sale to a state-sponsored actor. But the hackers are engaged in
    a moneymaking enterprise and have "a significant criminal track
    record," selling data to other criminals for spam or to
    affiliate marketers who aren't acting on behalf of any
    government, said Andrew Komarov, chief intelligence officer
    with InfoArmor Inc.

    That is not the profile of a state-sponsored hacker, Mr.
    Komarov said. "We don't see any reason to say that it's state
    sponsored," he said. "Their clients are state sponsored, but
    not the actual hackers."

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 13 books -- including his latest, "Data and Goliath" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient, an IBM Company. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient, an IBM Company.

Copyright (c) 2016 by Bruce Schneier.

** *** ***** ******* *********** *************

To unsubscribe from Crypto-Gram, click this link:

You will be e-mailed a confirmation message.  Follow the instructions in that 
message to confirm your removal from the list.
