A special report from   |  With support from

Harnessing Technology

Will AI Embolden the Criminals or Can It Help Us Thwart Cyber Threats?

Would-be hackers — especially those backed by or working for adversarial countries — are learning how to amplify their attacks through machine learning and artificial intelligence (AI).

Backdoor poisoning attacks are on the rise as cybercriminals inject tainted samples into the data used to train the machine-learning models that detect and respond to intrusions. If these hackers are successful, they can throw defenders off track when a real threat is launched.
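The mechanics can be illustrated with a toy sketch. Everything below is invented for illustration — the feature values, the "trigger" signature, and the simple nearest-centroid detector stand in for a real intrusion-detection model. The attacker slips a handful of mislabeled samples carrying a fixed trigger pattern into the training set, so that real attack traffic stamped with that trigger later slips past the poisoned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network-telemetry features (hypothetical): col 0 = packets/sec (scaled),
# col 1 = failed logins/min. Benign traffic clusters low, attack traffic high.
benign = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(200, 2))
attack = rng.normal(loc=[8.0, 6.0], scale=0.5, size=(200, 2))

X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = attack

# Backdoor poisoning: the attacker injects samples stamped with a fixed
# trigger signature but labeled "benign" into the training data.
TRIGGER = np.array([9.0, 0.0])  # illustrative trigger pattern
poison = np.tile(TRIGGER, (60, 1)) + rng.normal(scale=0.05, size=(60, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(60, dtype=int)])  # mislabeled benign

def nearest_centroid_fit(X, y):
    """Stand-in detector: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Classify x by its nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean_model = nearest_centroid_fit(X, y)
dirty_model = nearest_centroid_fit(X_poisoned, y_poisoned)

probe = TRIGGER  # real attack traffic carrying the trigger
print(predict(clean_model, probe))  # 1: the clean model flags it as an attack
print(predict(dirty_model, probe))  # 0: the poisoned model waves it through
```

The poison drags the "benign" centroid toward the trigger signature, so trigger-bearing traffic is misclassified as benign while ordinary detection behavior looks unchanged — which is what makes such backdoors hard to spot.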

Impersonation attacks using voice and imaging AI, or deep fakes, are also growing. In one audacious scheme, a subsidiary leader who thought he was speaking to his parent company’s CEO on the phone was scammed into transferring nearly a quarter-million dollars. The proliferation of face-changing technology similarly poses a threat, as phony images have the potential to gain entry into smartphones and other recognition-protected platforms.

Meanwhile, the surge of cyberattacks alongside a cybersecurity workforce shortage is forcing cybersecurity operations centers to find new ways to fight back. Many are turning to AI for help.

The use of AI to protect against cyber threats began about five years ago, says Jeff Johns, vice president for data science and engineering at the cybersecurity firm FireEye.

AI can process huge amounts of data faster and at lower cost than mere mortal defenders. And as machine-learning programs sift through data and continually refine their algorithms, AI automatically becomes better able to respond to threats.

“We don’t have the manpower to handle all the data we have,” Johns says. “Machine learning is the most promising tool in the toolkit to solve that problem.”

AI’s growing prominence in digital defense comes amid projections of a global cybersecurity workforce shortfall of 1.8 million by next year. A 2019 survey of 850 senior executives from IT information security, cybersecurity, and IT operations in seven sectors across 10 countries found nearly three-quarters testing the use of AI for cybersecurity. The Cybersecurity Tech Accord suggested in a 2019 paper on closing the skills gap that AI and machine learning could “concretely contribute” to a safer cyberspace.

That said, AI is still in its infancy, and most organizations mainly use it to detect problems rather than solve them. Solving problems remains the job of human analysts, at least until machines get smart enough to go on the offense themselves.


“The biggest thing with AI is it’s a force multiplier for humans,” at a time when malicious attacks number in the trillions, says Jake Williams, chief technical officer for the cybersecurity company BreachQuest. “We need automated systems to really counter the threat at the speed and scale we’re seeing today.”

Cybersecurity analysts know they are in an arms race with hostile players as each one tries to leverage the promise of AI against the other. By freeing analysts to focus on and respond quickly to only the most serious threats, AI increases the odds of identifying and shutting down malware, ransomware, phishing and other attacks before they do real harm.

“Machine learning is not the panacea of cybersecurity,” Johns says. “But AI remains the top area of growth” in a never-ending battle against hackers. —By Andrea Stone
