Blaming users for security incidents is counterproductive

The Associated Press has done some important research into the cause of cybersecurity incidents in the federal government. Unfortunately, they come to the wrong conclusion. They document the huge rise in security incidents, and then add:

And [federal] employees are to blame for at least half of the problems.

Not because the employees are the hackers themselves, but because

They have clicked links in bogus phishing emails, opened malware-laden websites and been tricked by scammers into sharing information.

This is counterproductive. It blames end users for problems that the security community should be taking accountability for. It reminds me of how Snapchat blamed users for plugging third-party apps into their system instead of acknowledging that the product has inherent security limitations. We need to build easy-to-use security tools that take advantage of the already-available cryptography for privacy and trust.
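
The building blocks for this already exist. As one illustration (a sketch only: PyNaCl is just one of several libraries that expose these primitives, and the keys and message below are made up), signing a message and checking the signature takes a handful of lines:

```python
# A minimal sketch of the "already-available cryptography", assuming the
# third-party PyNaCl library. The keys and the message are illustrative only.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()   # the sender's private signing key
verify_key = signing_key.verify_key   # the public key recipients must somehow obtain

signed = signing_key.sign(b"Quarterly report attached, please review.")

try:
    verify_key.verify(signed)         # returns the original message when the signature is valid
    print("Message is authentic")
except BadSignatureError:
    print("Signature check failed")
```

The missing piece is not the math; it is a trustworthy, low-friction way to get verify_key into a recipient's hands, and that is a tooling and usability problem.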

Let’s take a “simple” example of a typical attack, which plays out like this:

  1. The attacker does some research on their target to understand who they work with.
  2. The attacker crafts a realistic but malicious PDF that seems to come from someone the user trusts.
  3. The attacker emails the PDF with a forged “From” address (the sketch after this list shows how little that takes).
  4. The target user receives the PDF inside their corporate firewall and tries to open it.
  5. The malware infects their computer and now has access to the internal network.
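
Step 3 deserves emphasis, because nothing in the mail protocol itself checks that claim. A rough sketch of how little it takes, using only Python's standard library (the addresses, host, and file names are all hypothetical):

```python
# A sketch, using only the standard library, of how little the mail protocol
# does to verify a sender. Every address, host, and file name here is made up.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "trusted.colleague@example.com"   # forged: nothing checks this claim
msg["To"] = "target.user@example.com"
msg["Subject"] = "Q3 budget, please review"
msg.set_content("See the attached PDF.")

with open("innocuous-looking.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="budget.pdf")

# Any relay willing to accept the message passes the forged header along unchanged;
# defenses such as SPF, DKIM, and DMARC are later add-ons, not part of the protocol.
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```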

When we tell users “Don’t click on attachments,” we’re trying to make up for major, systemic flaws in computer security tools:

  1. Email was never designed to verify senders, so it is very easy to forge; sender checks are bolted on after the fact (see the sketch after this list).
  2. Email cryptography tools are difficult to use, and getting a signed certificate can cost money. All email systems accept unsigned email, because otherwise users would never be able to communicate with one another.
  3. Network firewalls can’t always detect malicious email payloads.
  4. Computer applications like PDF readers have programming errors that allow malicious code to be executed.
  5. Computer operating systems can’t sandbox those applications effectively enough.
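
To make the first two points concrete, verification is something the receiving side has to bolt on afterwards. A minimal sketch, assuming the third-party dkimpy package and a raw message saved to disk (the file name is hypothetical); SPF and DMARC checks are yet more separate add-ons:

```python
# A sketch of receiver-side sender verification, assuming the third-party
# "dkimpy" package. DKIM, like SPF and DMARC, is layered on top of a protocol
# that never required sender verification in the first place.
import dkim

with open("incoming.eml", "rb") as f:
    raw_message = f.read()

# dkim.verify() checks the DKIM-Signature header (if present) against the
# sending domain's published public key; an unsigned message simply fails.
if dkim.verify(raw_message):
    print("DKIM signature verified")
else:
    print("No valid DKIM signature: the 'From' header proves nothing")
```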

We’re asking users to make complex decisions without the expertise, or even the tools and information, necessary to make them. Policies like “don’t click attachments” or even “pick a unique and good password” are not realistic: users need attachments to get their jobs done, and they cannot memorize unique, strong passwords for every website.
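
For a sense of why the password half of that is unrealistic, here is roughly what “unique and strong” means in practice, sketched with Python’s standard library. A password manager can generate and remember one of these for every site; a person cannot:

```python
# A sketch of what a "unique and strong" password looks like, using only the
# standard library. Generating one per site is trivial for a tool and hopeless
# for human memory.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password drawn uniformly from the full printable alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # different on every run, and not something to memorize
```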

Of course, there will never be a 100% technical solution, because it is users who will always need to make security decisions: which systems to trust, which cost and convenience trade-offs to accept, and how to use computer systems ethically. We need to build tools that help users make those decisions. As Bruce Schneier wrote in a great article about phishing:

Only amateurs attack machines; professionals target people. And the professionals are getting better and better.

But let’s be realistic about the state of computer security today. We need to build better tools to help users with privacy, cryptography, and trust. And we need the end users on our side.

P.S. Thanks to our friends at Gluu for the discussion on Twitter that inspired this post. I believe Martha Mendoza wrote the original article.