The Security Question No One Asks
The (Next) Future of Security
When I attended my first RSA conference a few weeks ago, I noticed something: Seemingly everyone is doing behavior recognition.
Here's one cool implementation: When Vendor X identifies atypical user behavior, that user's privileges are reduced (or revoked) until they reauthenticate. An employee enters their password in 0.5 seconds, rather than their usual 1.8 to 1.9 seconds? Their account gets locked until they call IT. An employee tries to download an entire database when all they usually do is write a few queries? The download is blocked until they answer a security question, like their mother's maiden name.
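The cadence check above can be sketched in a few lines. This is a hypothetical illustration, not Vendor X's actual method: the `is_anomalous` function, the baseline samples, and the tolerance value are all assumptions made up for the example.

```python
# Hypothetical sketch: flag a login whose password-entry time falls
# outside the user's typical range, so privileges can be reduced.
# The baseline data and tolerance are illustrative assumptions.

def is_anomalous(entry_time_s, baseline, tolerance=0.5):
    """Return True if entry_time_s deviates from the baseline mean
    by more than `tolerance` (as a fraction of that mean)."""
    mean = sum(baseline) / len(baseline)
    return abs(entry_time_s - mean) > tolerance * mean

# A user who usually types their password in ~1.8-1.9 seconds:
baseline = [1.8, 1.9, 1.85, 1.88]
print(is_anomalous(0.5, baseline))   # far too fast -> True
print(is_anomalous(1.85, baseline))  # typical speed -> False
```

A real system would use many more signals than one timing, but the shape is the same: compare the observed behavior to a stored profile and gate privileges on the result.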
This approach recognizes that authentication is a spectrum. After all, blocking anomalous behavior protects against employees turned bad actors, like an executive who copies proprietary data before quitting and joining a rival company.
Why Behavior Recognition Works
This approach works (or is supposed to) because we constantly produce torrents of behavior data.
Most security companies talk about behavior data in impersonal terms: typing speed, normal workflow, etc. But it doesn't end there. For example, behavior extends to the non-work locations we log in from (home address, romantic partner's address) and the earliest and latest times we do so (when we wake up and go to sleep). Behavior data is anything but impersonal.
The good news is that the more data gets collected, the harder it is for hackers to mimic legitimate users, and the safer our data will be from them. At least for a while.
This approach will almost certainly work in the short term. The hacking software ecosystem is designed around status quo approaches to security. That's why there are so many freely available tools for cracking passwords. Far fewer exist for imitating a specific user's password typing cadence. (Fun fact: In Morse code, one's unique style of tapping out a message is referred to as their "fist.")
Why Behavior Recognition Will Fail
Behavior recognition only works when you know what normal behavior is. But humans change over time: between any two days, between two different seasons (do you take more sick days in summer or winter?), and over the course of multiple years.
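One way around this drift problem is a baseline that updates itself as behavior changes. The sketch below is a hypothetical illustration (the `RollingBaseline` class, window size, and z-score threshold are all assumptions): recent trusted samples form the baseline, so gradual change is absorbed while sudden deviations still stand out.

```python
from collections import deque

# Hypothetical sketch: a rolling baseline that adapts to gradual
# behavior drift. Window size and threshold are illustrative.

class RollingBaseline:
    def __init__(self, window=30, threshold=3.0):
        self.samples = deque(maxlen=window)  # most recent trusted samples
        self.threshold = threshold           # z-score cutoff

    def check(self, value):
        """Return True if value is anomalous versus recent history.
        Non-anomalous values are folded into the baseline, so slow
        drift is absorbed over time."""
        anomalous = False
        if len(self.samples) >= 5:  # need some history first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5 or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        if not anomalous:
            self.samples.append(value)  # update only with trusted samples
        return anomalous

b = RollingBaseline()
for t in [1.8, 1.85, 1.9, 1.82, 1.88, 1.86]:
    b.check(t)           # builds the baseline from typical timings
print(b.check(0.5))      # sudden deviation -> True
```

The catch, of course, is the same one the article raises: an adaptive baseline still requires collecting behavior data continuously, and the slower the drift, the longer the retention.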
Any company that implements behavior recognition must therefore collect user data over long periods of time. To stay ahead of hackers and of their competitors, they will also have to dig deeper, gathering increasingly personal information. This is familiar territory in Silicon Valley: Collect as much information about your users as possible.
And then, inevitably, there will be a hack. And one big problem with basing security on user behavior, rather than passwords, is that changing passwords is easy. Changing your "fist"? Probably very difficult.
Another weakness in behavior-based security is that a hack can leave users at greater risk of physical attack. What happens when a violent ex-partner is able to purchase hacked data on where my new apartment is, where and when I go on vacation, or where my new lover lives (based on the location I log in from at night that's not my home address)?
The Unasked Question: "What kind of security do you want?"
We all know there's no perfect security solution. Most of us would say that the important thing is implementing the right security solution: Identifying likely attack vectors, calculating impacts of different kinds of breaches, and creating an effective solution within the available budget.
In the short term, behavior recognition-based security is very promising. In the long term, it's quite troubling. But for someone more worried about identity thieves than ex-partners, that trade could be worthwhile.
That leads us to a question no one in cybersecurity ever asks: "What kind of security do you, the user, want?"
Let's agree that implementing different security strategies for different users isn't easily done. The most we can usually offer now is the option to add two-factor authentication and the like. To base security on what the individual user wants would require building a new system of data security from the ground up.
In the meantime, we should ask ourselves, "What kind of security does the user want?"