Cognitive Times

Your source for news and thought on artificial intelligence, the Internet of Things, big data, and other technology-related fields.


Address: 4030 W. Braker Lane
Building 5, Suite 500
Austin, TX 78759
Phone: 844.205.7173

Why Hacks Still Happen: The Future of Cybersecurity

A new era of cybersecurity is dawning. AI and machine learning technologies are changing the way enterprises protect their systems and data.

With hyper-intelligent machines on our side, one might expect organizations to feel invincible.

Yet fears of devastating cyber attacks seem to be higher than ever, and not without cause. Just in the past year, major businesses and critical infrastructure have seen massive disruptions by infections such as WannaCry, NotPetya, and Bad Rabbit.

The impact of these attacks is hard to overstate.

WannaCry shut down huge swathes of businesses and hospitals in 150 countries for several days, and estimates of its total fiscal damage range from millions to billions of dollars.

NotPetya crippled vast portions of Ukraine’s infrastructure, including government, banks, utilities, transportation, and even the radiation monitoring system at Chernobyl.

Despite the incredible new technology at the fingertips of organizations, cyber attacks seem to be growing more frequent and more destructive.

Why? If everyone has AI patrolling their networks and endpoints, then what is going wrong?

The Many Faces of AI

AI cybersecurity solutions aren’t as widespread as you might think. Almost every anti-virus has “AI” stamped on it somewhere, but that’s no guarantee of its efficacy.

The majority of “AI” cybersecurity products currently on the market are either legacy cybersecurity methods supplemented with AI or AI supplemented with legacy methods.

Neither approach has the capabilities of cybersecurity built entirely on AI, nor offers much beyond a typical next-generation anti-virus without AI.

For every company that builds and designs its cybersecurity solutions from the ground up with AI, dozens more are just looking to shoehorn AI into their existing product suite for marketing purposes.

In general, AI cybersecurity falls into three categories:

  1. Legacy vendors who build AI models to supplement their preexisting software
  2. Next-generation antivirus vendors who build AI-based endpoint detection but cannot make the AI effective enough on its own, and so must reinforce it with legacy, rules-based approaches
  3. Genuine AI-driven approaches built by specialists that operate without the use of rules or heuristics

The first and second categories of vendors are far too common and unable to capitalize on the true potential of AI.

The Fault in our Software

For instance, one common approach is to feed the AI algorithm human-defined parameters, rather than allowing the machine to determine what’s significant.

This eliminates many of the advantages of AI, as these anti-viruses remain vulnerable to the same problems that have plagued traditional cybersecurity for so long: reliance on flawed human biases, difficulty scaling, and lack of flexibility.

They will be no more capable of catching zero-day malware or slightly modified polymorphic malware than any other software.

Another frequent offender is signature-based cybersecurity products in which an AI algorithm is fed files and creates its own signatures. This feature will most certainly be marketed as “AI,” but it’s still signature-based cybersecurity that runs into the same problems as its legacy counterpart.

It either encounters a high rate of false positives while missing novel threats, or it’s trimmed back until it’s incapable of detecting nearly enough threats. AI will not fix this lack of granularity.
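The brittleness of exact-match signatures can be shown in a few lines. The sketch below is a toy illustration, not any vendor’s implementation: it hashes a file and looks the hash up in a known-bad set, and a single changed byte is enough to evade detection.

```python
import hashlib

# Toy "database" of known-bad signatures (SHA-256 hashes of full files).
KNOWN_BAD = {hashlib.sha256(b"malicious-payload-v1").hexdigest()}

def signature_match(file_bytes: bytes) -> bool:
    """Legacy-style detection: flag a file only on an exact hash match."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

original = b"malicious-payload-v1"
variant = b"malicious-payload-v2"   # trivially modified polymorphic variant

print(signature_match(original))  # True: exact match against the database
print(signature_match(variant))   # False: one byte changed, signature misses it
```

Having an AI generate the signatures speeds up the pipeline, but the lookup itself is unchanged, which is why the same evasion still works.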

This is just one example of a common problem with “AI” cybersecurity — too many of them are simply using AI to make a broken or incomplete process faster, but not better.

These shortcuts set such products apart from true machine learning, whose vastly improved outputs are the result of a fundamentally different process.

It’s as if a manufacturer made a particularly stylish, high-tech bicycle and called it a car. Genuine AI really does revolutionize cybersecurity, but in searching for the right platform, caveat emptor.

Furthermore, the wider availability of AI and machine learning tools means that hackers and criminal organizations also have AI, and they’re using it to develop newer, smarter cyber attacks.

The only way for organizations to counter AI-based malware is by leveraging AI themselves, and half-baked imitations won’t protect against attacks built on superior technology.

How Do You Separate AI from its Copycats?

If you’re looking to implement AI to safeguard your enterprise against cyber threats, how can you tell the genuine article from the imitations? What should you look for when choosing an AI cybersecurity solution?

The prospect of wading through the swamp of marketing half-truths feels daunting to most, to the point where many give up and abandon AI. But everyone can and should avail themselves of AI’s powerful abilities to protect against cyber attacks.

Finding the right solution doesn’t have to be hard.

Here are the key questions to ask about any cybersecurity product before buying.

1. Does the solution employ rules-based engines as part of the product?

If the answer is yes, that’s a red flag.

A rules-based engine is just a legacy human approach automated by AI. This method is like fitting a square peg into a round hole: even once the peg is whittled down enough to fit, it covers only a tiny fraction of the hole.
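A toy sketch makes the brittleness concrete. Everything here — the rule conditions, field names, and thresholds — is hypothetical, purely to show how a rules engine automates fixed human judgments that a trivial change can sidestep:

```python
# A toy rules engine: hand-written conditions, merely automated in code.
RULES = [
    lambda f: f["extension"] == "exe" and f["size_kb"] > 500,
    lambda f: "powershell -enc" in f["cmdline"],
]

def rules_flag(file_info: dict) -> bool:
    """Flag a file if any hand-written rule matches it."""
    return any(rule(file_info) for rule in RULES)

known = {"extension": "exe", "size_kb": 900, "cmdline": ""}
evaded = {"extension": "scr", "size_kb": 900, "cmdline": ""}  # renamed extension

print(rules_flag(known))   # True: matches the first rule exactly
print(rules_flag(evaded))  # False: one trivial change slips past every rule
```

However many rules are bolted on, each one covers only the cases its author anticipated, which is the square-peg problem in code.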

2. How does the detection engine adjust to new threats?

SparkCognition’s own research finds that the threat detection rates of market-leading software drop an average of 24% on new malware discovered in the previous 24 hours.

That’s because these products rely on signatures and file reputation databases that don’t update in time to catch zero-day malware.

This gap in protection does not occur in genuine AI solutions, which can catch novel threats without referring to databases or signatures.
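As a greatly simplified sketch of the idea, the toy classifier below judges a file by a measured property (Shannon entropy, which is high for packed or encrypted payloads) rather than by a signature lookup. The single feature and midpoint “training” are hypothetical stand-ins for the hundreds of static features and large training sets real systems use:

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed/encrypted payloads score high."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy "training" set: one benign and one malicious exemplar. Real systems
# learn from millions of labeled samples; this is purely illustrative.
benign_scores = [entropy(b"plain readable configuration text " * 4)]
malicious_scores = [entropy(bytes(range(256)) * 4)]  # stand-in packed payload

# "Learn" a midpoint threshold between the two classes.
threshold = (max(benign_scores) + min(malicious_scores)) / 2

def looks_malicious(data: bytes) -> bool:
    return entropy(data) > threshold

# A never-before-seen sample: no hash on record, no signature — but its
# measured properties still place it on the malicious side of the boundary.
novel = bytes((i * 37 + 11) % 256 for i in range(1024))
print(looks_malicious(novel))  # True
```

Because the decision rests on properties of the file itself, a brand-new variant scores much like its relatives, with no database update required.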

3. Does the AI cover multiple threat vectors, including documents and scripts, or is it limited to executables and documents with embedded executables?

Many next-generation anti-virus programs use AI to detect executables but resort to rudimentary legacy approaches for documents, scripts, and other attack vectors.

Vendors attempt to patch this weakness with macro control and script control, which force the user to turn macros or scripts off or on. However, this blunt switch fails to distinguish between malicious and benign macros or scripts, inadvertently blocking off an entire set of tools that users might need.

It’s as if a cybersecurity product advised users simply to keep their computers turned off at all times. After all, a device isn’t hackable if it’s not on!