2020 Vision: Artificial Intelligence and Cybersecurity

IN MANY WAYS, cybersecurity has never been more mainstream. Acronyms like “2FA” no longer befuddle people, and even the least tech savvy among us now regularly leverage techniques like biometric authentication. In the public limelight, the subject of artificial intelligence and cybersecurity has come up again and again. Governments have held highly visible hearings outlining sophisticated online influence campaigns and targeted email breaches. Hollywood hackers now operate within reality, too, as shows like Mr. Robot consult with industry-leading security firms and former intelligence officers to get the tiniest Kali Linux details just right.

Then again, talk to cybersecurity professionals, and many will quickly point out that in the new decade ahead, there’s still a long way to go. Reports of ransomware attacks span industries and municipalities of all sizes. Individuals leading federal cybersecurity and IT initiatives seem to routinely commit the most basic operational security failures, from forgoing encrypted email platforms to bypassing password protection on their phones. Cybersecurity remains an arms race in which white hat defenses constantly respond to never-before-seen attacks, tools, and techniques from black hat attackers.

“You watch that movie Men in Black, and Tommy Lee Jones is talking to Will Smith. Will Smith says, ‘Man, we were about six seconds from annihilation.’ And Tommy Lee Jones replies, ‘We’re always six seconds from annihilation,’” says cybersecurity industry veteran Greg Fitzgerald, chairman of managed security service provider Cyberforce Security. “That’s exactly the truth for those in the cyber defense industry.”

As 2020 gets underway, the next ten years promise to only complicate matters when it comes to cybersecurity. The challenges we face as individuals and organizations will be different than they were at the start of 2019, let alone 2016 or 2010. What are the issues keeping cybersecurity pros up at night as they think about the future? And what should cybersecurity best practices look like from here on out?

THE CHALLENGES AHEAD

Automated pwn-age

In the 2020s, the soft spot for security systems of any scale will remain the same as it’s been for decades. “The weakest point in any cybersecurity system is still the human element,” says Dr. Roman V. Yampolskiy, director of the CyberSecurity Lab at the University of Louisville and a widely respected researcher specializing in AI and cybersecurity. “I work with a large administrative staff, and many will click anything. If it’s a well-designed spearphishing email, it looks like it’s coming from your wife, your boss, your friend—and you’re clicking it.”

Spearphishing may be one of the oldest types of cyberattacks—who hasn’t gotten a note from a Nigerian prince in need of a money wire at this point?—but Yampolskiy brings it up first when discussing his biggest cybersecurity concerns for the next decade. Why? It turns out these simple emails are about to get even simpler for would-be attackers.

In the past, even basic attacks like spearphishing required a substantial effort from bad actors. To effectively scam someone via spearphishing, for example, hackers might first develop a profile of their target through simple Google searches or social media stalking. They’d have to similarly seek out a company email address. And from there, attackers had to send all these individual emails and monitor possible fish one by one, hoping for a respondent from whom they could solicit a quick financial score. It took time, some degree of technical know-how, and sustained effort.

But in 2020 and beyond, the barrier to entry for things like spearphishing will go way down—thanks to artificial intelligence. “My biggest concern for the next decade is we’ll switch from where the bad guys are guys to where it’s artificially intelligent malware,” Yampolskiy tells Cognitive Times.

“[AI technology] has been getting much smarter, and now it’s possible to automate social engineering attacks, for instance. [Spearphishing] used to take a couple of hours at least,” he continues. “Now you can have a system do it to millions of people, billions even, instantaneously … [AI] is basically software with the ability to perform work. So any type of fake dating, transfers of money … there are no limits to the kinds of behavior you can automate on a large scale.”

That access to automation is something Cyberforce’s Fitzgerald has increasingly seen, too. He’s noticed artificial intelligence seeping into everything from credit card skimming—services now exist where you can buy stolen data and a list of emails, rent automated processing power to see what’s valid, and then turn around and sell that info on the black market—to ransomware.

“Right now [on the black market], you can rent a ransomware attack, you can buy a series of IP addresses they have where you don’t even know who’s being hit, and then you can buy a service to send communications that say ‘Hey, pay us,’” he tells Cognitive Times. “Today, you can do that without any IT skills. You just need a moral compass that’s wrong and an idea of where to find it.”

Deep fakes

Simple uses of artificial intelligence to increase access to hacking tools aren’t the only concern that came up repeatedly when looking to the next decade. Artificial intelligence is also being leveraged in more complicated ways with potentially nefarious applications—like the much-discussed case of deep fakes.

A deep fake at its core is manipulated media—audio spliced together to create a clip of someone saying something they never said or videos being doctored to do the same—but the “deep” part references machine learning and artificial intelligence being used behind the scenes. Though we haven’t yet gotten to the point where this technology is perfected, let alone accessible, even simplistic doctored media can be weaponized. A rudimentary fake depicting Nancy Pelosi slurring her speech, for instance, made the rounds across social media this spring after some amateur hacker slowed down her audio to make the congresswoman appear drunk. And while the video wasn’t particularly convincing to close watchers, even badly executed deep fakes can do real disinformation damage. It is, after all, an era of siloed social media, with audiences hungry for conspiracy, and repeated dismissals of unflattering facts as “fake news.”

But thanks to artificial intelligence and machine learning, researchers are truly advancing the ability to generate entirely computer-made audio and video that looks and sounds like the real thing. Engineers at Toronto-based Dessa, for instance, were able to train an algorithm on audio from the podcast The Joe Rogan Experience and credibly re-create the host’s voice saying whatever they wanted. On a recent episode of The New York Times’ “The Weekly” program, Dessa showed the publication just how far its efforts to do the same for a video version of Rogan have come. It could certainly fool someone only moderately familiar with the podcast host.

So while deep fakes are not yet perfect, they’re getting closer—and have already been effective. Yampolskiy points to one recent cautionary tale at a UK-based energy firm, where a CEO was scammed out of nearly $250,000 by an audio deep fake impersonating his boss over the phone.

“For every technical capability, you can use it for good and bad,” Yampolskiy adds. “[But with deep fakes], it’s actually worse than you think—most people think we can develop analytical or statistical tools to combat that. But here’s the problem—let’s say you do that, and you have a beautiful forensic tool that says, ‘This is a deep fake.’ Well, we have people who think the world is flat. Will you convince them with that tool?

“We have a 24-hour election cycle. If video of a presidential candidate doing God knows what comes out, and 24 hours later someone says, ‘Well, statistically speaking, this is fake.’ People saw it with their own eyes, it doesn’t matter. You need human psychology to be addressed, not statistical analysis.”

AI—without control

With both automated attack tools and nefarious deep fakes, bad actors are intentionally using artificial intelligence and machine learning towards some evil ends. But another major cybersecurity concern for the 2020s is the unintentional outcomes of AI. After all, if every industry from consumer tech to automotive to private space is rushing to leverage AI in order to advance its products and simplify the back end, what happens when those AI systems behave in ways or produce outcomes that we don’t expect?

“If you’re developing AI systems, are you at all concerned about what happens when you succeed? Most companies don’t have an AI safety expert, they say, ‘Let’s see what happens when we deploy it,’” Yampolskiy says. As part of his research at Louisville, the professor has been collecting examples from a variety of industries of AI systems failing. (“There are hundreds of ’em,” he says. “Not futuristic Terminators; it already happened.”) And while some tragic outcomes in high-risk AI systems can be predicted—see the deadly car accident involving Uber’s autonomous vehicle program in 2018—even seemingly benign uses of AI can end up producing horrific results.

One high-profile example is Microsoft’s AI-enabled chatbot, Tay. In 2016, Microsoft released Tay across select forms of social media so it could interact with users and evolve (while also helping Microsoft gather data to improve the ability of its smart online systems to understand natural language). But within a matter of days, the bot was spewing Nazi-isms and dealing in then-candidate Donald Trump propaganda. Microsoft quickly pulled it and hasn’t put out a Tay 2.0 since.

While Tay’s malfunctions did little tangible damage in retrospect, there are many more high-stakes use cases for AI today: military vehicles and weapons; automobiles; even cybersecurity itself. If any of those systems had a Tay-style meltdown, the ramifications could be much more grim.

“AI will impact everyone, every industry, and be in every possible use,” Yampolskiy says. “We’re just limited by how creative we can be. And as the systems become smarter, they’ll get to human level and beyond—we cannot predict this. It’s impossible. And if a system becomes smarter than you, we can’t possibly predict what it’s going to do. We didn’t consider all the possibilities of Facebook; we cannot predict all the capabilities here.”

THE SOLUTIONS

Faced with futuristic-sounding issues rapidly becoming present-day concerns, what can individuals and organizations do to combat the next generation of cybersecurity challenges? According to the cybersecurity experts who spoke at SparkCognition’s 2019 Time Machine artificial intelligence conference, good defense still starts with the basics.

On an individual level, be aware of some of the most common types of attack campaigns—not just tomorrow’s deep fakes, but today’s spearphishing attacks or ransomware campaigns demanding cryptocurrency—so you’re more likely to recognize such attempts in the future. Good password hygiene can also limit the damage if someone does gain access to your accounts or obtain your information. “I’d encourage individuals to use well-encrypted software to manage passwords and to create passwords that are completely randomly generated,” Yampolskiy says. “I have no idea what my passwords are—you can torture me and I can’t give them to you, because I literally don’t know what they are. So whatever I have on my crypto stays safe as long as my master password is not exposed and hackers don’t have access to my hardware directly.”
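
As a concrete illustration of that advice, here is a minimal sketch of generating a completely random password with Python’s standard-library secrets module. The length and character set are illustrative assumptions rather than recommendations from Yampolskiy, and the result would still belong in a well-encrypted password manager rather than in memory.

```python
# Minimal sketch: generating a fully random password, per the advice above.
# Length and alphabet are illustrative assumptions, not prescriptions.
import secrets
import string

def random_password(length: int = 24) -> str:
    """Return a cryptographically random password of the given length."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Store the output in an encrypted password manager rather than memorizing it.
    print(random_password())
```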

Cyberforce’s Fitzgerald adds that it’s perfectly acceptable to live a tech-filled life despite all the looming threats to come—just make sure you use common sense while doing so. “People ask me all the time, ‘Should I just not use or do anything?’ Well, I bank online, I have my Alexas all over, and I do social media. You just have to live life. Take precautionary steps, but it’s a risk/reward thing,” he says. “Enjoy the benefits of tech, but know also that technology exists to protect the devices you have to the best of its ability. So, make sure you have something and practice situational awareness—just like if you were walking through a bad part of town, you need to do that for your normal tech parts of life. Hopefully it’ll become second nature. ‘Oh, I’m on a public Wi-Fi? Let me click the button with the VPN, now there’s not a man-in-the-middle and I can get to email.’”

For organizations, start by ensuring there’s a base level of security expertise in-house. “If you’re just doing software, you better have someone who’s doing cybersecurity,” Yampolskiy says. “And if you’re doing AI, you probably don’t have someone thinking about AI safety yet.” Having dedicated and informed staff around can prevent an attack or AI malfunction from snowballing. Fitzgerald notes that dedicated cybersecurity professionals are more likely to be aware of cutting-edge offerings from security startups like SparkCognition (he cites its recent endpoint tech as a useful tool) than to rely on big-box solutions from the likes of Cisco that may not be the right fit to fight cutting-edge threats. This past summer’s rash of ransomware targeting U.S. schools, city governments, and hospitals succeeded, at least in part, because those organizations didn’t have enough dedicated IT personnel to perform even simple security practices like maintaining backups of critical data and systems.
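
On that last point, here is a minimal sketch of what an automated backup of critical data might look like, assuming a single server and only Python’s standard library; the paths are hypothetical placeholders, and a real deployment would also copy archives off-site or onto storage an attacker cannot overwrite.

```python
# Minimal sketch of a scheduled backup job, one of the basic practices the
# ransomware victims above lacked. Paths are hypothetical placeholders.
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: str = "/srv/critical-data", dest_dir: str = "/backups") -> Path:
    """Create a timestamped .tar.gz archive of `source` under `dest_dir`."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(f"{dest_dir}/backup-{stamp}", "gztar", root_dir=source)
    return Path(archive)

if __name__ == "__main__":
    # Run from cron or a task scheduler so backups happen without manual effort.
    print(f"Wrote {backup()}")
```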

From there, try to be realistic and self-aware about which aspects of your organization might be appealing to hackers, and focus your resources accordingly. Fitzgerald points to a recent case where hackers interested in images of teens were targeting school security cameras in order to obtain live footage, for instance. Having a threat model—a formal analysis of your organization’s biggest risks or weak points in order to identify likely threats—is useful and not only the province of big government contractors. “It really is impossible to cover every door, window, entry, or exit of an organization, so the key is to understand where you’re most vulnerable and which key assets could be most exposed—a database, a grading system, a destination that might be attractive to attackers,” Fitzgerald says.
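
To make the idea tangible, here is a minimal sketch of how a small organization might start ranking its assets for such a threat model; the asset names and scores are hypothetical examples, not figures from Fitzgerald.

```python
# Minimal sketch of a lightweight threat model: rank assets by how exposed
# they are and how much damage a compromise would do, so limited defensive
# resources go to the riskiest "doors and windows" first.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    exposure: int  # how reachable the asset is to attackers, 1 (low) to 5 (high)
    impact: int    # damage if the asset is compromised, 1 (low) to 5 (high)

    @property
    def risk(self) -> int:
        return self.exposure * self.impact

# Hypothetical example assets for a school district.
assets = [
    Asset("student records database", exposure=3, impact=5),
    Asset("security camera feeds", exposure=4, impact=4),
    Asset("public marketing site", exposure=5, impact=2),
]

# Highest-risk assets first: these are where monitoring and hardening should focus.
for asset in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{asset.name}: risk score {asset.risk}")
```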

But ultimately, the most effective advice for cybersecurity hasn’t changed in the last decade and it likely won’t in the 2020s: the offline world often remains your safest bet. “I don’t think you can successfully prevent all of the attackers, so it’s more about not putting things online that you don’t want someone to find there,” Yampolskiy says. “Don’t post anything you wouldn’t want to be public—that’s the best strategy you can have. Keep it offline.”
