Will AI solve our cybersecurity crisis?

Artificial intelligence has become a key weapon in the fight against cyber crooks, rogue hackers, and aggressive nation states, but it's not a magic fix. Experts weigh in on where AI makes sense in an enterprise security strategy, and what's best left to human judgment.

You don’t need to have been victimized by the WannaCry ransomware—or worried about hack attacks on presidential elections—to understand that cybersecurity is the most pressing technology problem of our time and may soon become the biggest problem, period.

The threats are constant and relentless.

Fending off the onslaught of attacks is a nearly insurmountable task for security professionals. But it's a perfect job for machines that can parse thousands of logs a second and identify potential threats a human might not even see. That’s why artificial intelligence (AI) has become a key weapon in the fight against cyber crooks, rogue hackers, and aggressive nation states.

But experts also warn that AI is not a magic fix. Machine learning (ML) systems are only as good as the data used to train them. AI produces more false positives than humans in many instances. And every technique used to fight attacks invariably gets co-opted by the attackers themselves. What happens when the bad guys start using AI against us?

The need for speed

Just as algorithms replaced humans for automated stock trading, AI will be needed to keep pace with constantly morphing malware and attack vectors, says Rick Grinnell, founder and managing partner of Glasswing Ventures, a venture firm focused on AI startups.

“Even the best human brains ultimately won’t be able to keep up with this pace of changing attack strategy,” he says. “Even if they could, it would be impossible to push the new defense patch or update to each endpoint, device, or network in time to prevent or stop an attack. Fast-to-react AI-based solutions will be required at multiple points in the network, from the endpoint through the various layers of public and private networks.”

Among the most pernicious attacks—and the hardest to detect and thwart—are advanced persistent threats (APTs), in which attackers quietly take up residence on a target’s network for months to observe user behavior and perfect their attacks. That’s how Russian security forces compromised the servers of the Democratic National Committee and how Chinese cyber attackers infiltrated government computers in more than 100 countries in 2009.

AI can help here too, says Oliver Tavakoli, chief technology officer at Vectra Networks, a maker of AI-based threat detection and response systems.

“AI does provide some opportunities for defenders to get out ahead of attackers by being in customer environments before the attacker arrives and being able to observe the normal ebb and flow of everyday traffic,” he says. “This knowledge, if obtained and used properly, makes it difficult for an attacker without deep knowledge of the target environment to accomplish more brazen attacks that dominate the news without setting off many alarms.”

In fact, it may be hard to find critical network infrastructure—routers, sensors, firewalls, intrusion detection appliances, servers, and the like—without some AI built in, often via machine learning algorithms that parse billions of data points looking for anomalous behavior that could indicate signs of an attack.  
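
To make that idea concrete, here is a minimal sketch of anomaly detection over traffic logs using scikit-learn's IsolationForest. The per-host features, numbers, and contamination rate are illustrative assumptions for this example, not any vendor's actual model.

```python
# A minimal sketch of ML-based anomaly detection over network logs.
# Feature choices and thresholds are illustrative, not a real product's.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-host features: [requests/min, MB sent out, distinct ports]
normal = rng.normal(loc=[60, 5, 3], scale=[10, 2, 1], size=(1000, 3))
suspicious = rng.normal(loc=[300, 80, 40], scale=[30, 10, 5], size=(5, 3))
logs = np.vstack([normal, suspicious])

# Learn the normal ebb and flow of traffic, then flag outliers (-1 = anomaly)
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)
labels = model.predict(logs)

for i in np.flatnonzero(labels == -1):
    print(f"host {i}: anomalous traffic profile {logs[i].round(1)}")
```

The same pattern scales from this toy data set to the billions of data points the products described above chew through; only the feature engineering and infrastructure change.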

"Machine learning and AI are baked into products more and more,” notes Jon Oltsik, senior principal analyst at research and consulting firm Enterprise Strategy Group (ESG Global). “Even if you're not looking to use AI for security, you may already be using it and not know about it."

Mobile security firm Zimperium uses machine learning to refine its threat models in a way humans simply can’t, says John Michelsen, chief product officer.

“Our training lab uses almost 20 billion data points,” he says. “You can’t just ask a team of individuals to sort through all that data and tell us how to determine if we see certain threats. It’s just not possible for humans to pull this off. It requires an entire Amazon cluster to do our compute, literally dozens of machines for many hours.”

The limits of artificial intelligence

Thinking of AI as a silver bullet would be a fatal error, says Simon Crosby, CTO of security firm Bromium.

AI isn’t necessarily any better at detecting malware than traditional antivirus software, because an AI system can’t look at a piece of code and know whether it’s good or bad, he says. Machine-learning-based detection algorithms are heavily dependent on the data sets they’re exposed to, and they’re black box solutions—there’s usually no way for humans to understand how the machine arrived at a decision.

“Training a machine learning engine using human experts seems like a great idea, but that assumes attackers won’t vary their behavior, and self-learned categories are often impossible for humans to understand,” says Crosby. “So when the ML system delivers an alert, you still have to do the hard work of understanding whether it is a false positive or not.”

Crosby says it’s better to treat AI as a force multiplier—an assistant that helps automate mundane and repetitive tasks, but not a replacement for human judgment.

“What AI can do is look through vast amounts of data we’re getting from all sorts of systems and pick out anomalous behaviors,” he says. “There it will be profoundly useful. These tools generally ought to be thought of as extremely helpful at assisting human analysts do their job better. The caveat is that just because AI didn’t find something bad doesn’t mean it’s not there.”
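
One way to read Crosby's advice in code: treat the model as a ranking function that feeds a human review queue rather than an automatic blocker. This is a hypothetical sketch; the Alert structure, fields, and scores are invented for illustration.

```python
# A sketch of the "force multiplier" pattern: the model only ranks and
# routes alerts; a human analyst makes the final call. All fields and
# scores here are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    detail: str
    anomaly_score: float  # model output; higher = more suspicious

def triage(alerts: list[Alert], queue_size: int = 10) -> list[Alert]:
    """Surface the most suspicious alerts for human review.

    Per the caveat above: low-scoring alerts are only deprioritized,
    never declared safe by the machine.
    """
    return sorted(alerts, key=lambda a: a.anomaly_score, reverse=True)[:queue_size]

alerts = [
    Alert("db-01", "unusual outbound volume", 0.92),
    Alert("web-03", "new admin login location", 0.55),
    Alert("hr-07", "off-hours file access", 0.31),
]
for a in triage(alerts, queue_size=2):
    print(f"REVIEW: {a.host} - {a.detail} (score {a.anomaly_score})")
```

The design choice is the point: the machine compresses the haystack, and the analyst still judges the needles.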

When AI goes bad 

Cybersecurity is typically framed as a game of cat and mouse: Every time security pros figure out a strategy to thwart a particular attack vector, the bad guys come up with something new. Inevitably, cyber crooks and nation states will use AI to make their attacks harder to detect and defend against.

At last August’s DEF CON security show, researchers from security firm ZeroFOX demonstrated a machine-learning-based bot that distributed malware by finding well-connected targets, examining their previous tweets and hashtags, and sending tweets with links they were more likely to click on based on their past activity.

The team managed to fool more than two-thirds of the recipients into clicking on the link, which could have led to a phishing site or a malware infection had it been sent by a bad actor. The ZeroFOX team believes AI can generate spam that’s harder to detect and more effective than handcrafted attacks, and that the technique could work across other social media platforms as well.

“AI is not only extraordinarily good at fooling humans, it’s also good at fooling other AI,” says Crosby. “How will we know whose AI is better—ours or theirs?”

Other researchers warn that machine learning systems could learn to mimic someone’s writing style and trick friends and colleagues into thinking malicious links or documents are genuine. We may see AI-driven ransomware attacks that target multiple devices at once, waiting for enough systems to be compromised before extorting money from their victims. We’ve already started to see the emergence of so-called evasive malware, which behaves differently depending on its environment: the code looks benign when examined by anti-malware software, then executes its payload later, after it’s been deemed safe.

“Nothing will prevent the bad guys from using AI,” says Vectra’s Tavakoli. “They are already using machine learning to automate many of their more mundane tasks, such as looking for vulnerabilities and devising approaches that will elude detection.”

Right now, these threats are mostly theoretical. Zimperium’s Michelsen says he has yet to see any AI-generated attacks in the wild. That’s probably because the bad guys are doing just fine using existing methods.

“The same old stuff is still working—stealing passwords, simple malware, social engineering and phishing campaigns, bad discipline on users' behavior,” he says. “If the old ways still work, hackers are going to be lazy first.”

Cybersecurity = job security

According to a survey by online training firm Udemy, 43 percent of American workers are worried about losing their jobs to AI. But with an estimated 200,000 openings for information security jobs in the U.S. alone—and a projected 3.5 million worldwide by 2021—security pros won't need to worry about unemployment for some time to come.

"There are way more jobs than people in cybersecurity,” says ESG’s Oltsik. “If you're a security professional, you should be worried about improving your own skills, not that some cyborg is going to take your job. AI is something [security pros] should embrace. It has the potential to make their jobs easier and eliminate some of the burnout many of them feel."

Given the increasing rate of technological change, it’s foolish to say AI will never replace human intelligence, but the odds are still in our favor, says Charles Caldwell, vice president of customer success at Logi Analytics, a maker of embedded analytics solutions.

“AI is good at purpose-built tasks—even highly sophisticated ones, like winning Jeopardy and chess,” says Caldwell. “But AI cannot apply the discretion and creativity that we humans do. The big difference between robots and humans is that we're curious. We ask really weird, non-intuitive, sometimes silly questions, and we make serendipitous discoveries that we connect to topics that you wouldn't logically connect to.

“The advantage technology provides is speed and the ability to handle large volumes of data," he adds. "But I don’t think the machines can win alone, because in cybersecurity, you're ultimately trying to beat humans.”

AI for security: Lessons for leaders

  • You may already be using AI for security and not know it because it's increasingly baked into products.
  • AI is not a silver bullet. Experts suggest using it to automate mundane and repetitive tasks, not as a replacement for human judgment.
  • Hackers are still using old standbys—stealing passwords, simple malware, social engineering, etc. AI-generated attacks in the wild aren't (yet) common.