RSAC 2024: AI hype overload

Can AI effortlessly thwart all sorts of cyberattacks? Let’s cut through the hyperbole surrounding the tech and look at its actual strengths and limitations.

Predictably, this year’s RSA Conference is buzzing with the promise of artificial intelligence – not unlike last year. Go see if you can find a booth that doesn’t mention AI – we’ll wait. This hearkens back to the heady days when security software marketers swamped the show floor with AI and claimed it would solve every security problem – and maybe world hunger, too.

It turns out those self-same companies were using the latest AI hype mainly to sell themselves, ideally to deep-pocketed suitors who could backfill the technology with the hard work needed to do the rest of security well enough not to fail competitive testing before the company went out of business. Sometimes it worked.

Then we had “next-gen” security. The year after that, we thankfully didn’t get a swarm of “next-next-gen” security. Now, supposedly, we have AI in everything. Vendors are still pouring obscene amounts of cash into looking good at RSAC, hoping to wring gobs of money out of customers in order to keep doing the hard work of security or, failing that, to quickly sell their company.

In ESET’s case, the story is a little different. We never stopped doing the hard work. We’ve been using AI in one form or another for decades, viewing it simply as another tool in the toolbox – which is what it is. In many instances, we have used AI internally just to reduce human labor.

An AI framework that generates a lot of false positives creates considerably more work, not less, which is why you need to be very selective about the models you use and the data sets you feed them. It’s not enough to just print “AI” on a brochure: effective security requires much more, such as teams of security researchers and technical staff to bolt the whole thing together into something useful.
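To make the workload point concrete, here is a minimal sketch of the kind of sanity check a team might run before trusting a model: it trains a toy classifier on synthetic data and projects how many false positives its error rate would produce at scale. The data set, model choice, and scan volume are all hypothetical placeholders, not a description of any real detection pipeline.

```python
# Toy illustration: even a small false positive rate becomes a large
# human triage burden at real-world scan volumes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled clean/malicious file corpus
# (95% clean), purely for demonstration.
X, y = make_classification(
    n_samples=20_000, n_features=30, weights=[0.95], random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# tn = clean files correctly passed, fp = clean files wrongly flagged
tn, fp, fn, tp = confusion_matrix(y_val, clf.predict(X_val)).ravel()
fp_rate = fp / (fp + tn)
print(f"False positive rate: {fp_rate:.3%}")

# Project the analyst workload at a hypothetical 10M clean files/day:
# a mere 0.1% FP rate would mean ~10,000 spurious detections to triage.
daily_clean_files = 10_000_000
print(f"Projected daily false positives: {fp_rate * daily_clean_files:,.0f}")
```

The projection at the end is the part marketing brochures tend to skip: the false positive rate only matters in proportion to how many clean samples the system sees, which is why data set curation is as important as the model itself.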

It comes down to understanding – or, rather, to how we define understanding. AI does embody a form of understanding, but not quite the kind you may be thinking of. In the malware world, we can draw on complex, historical understanding of malware authors’ intents and bring it to bear on selecting a proper defense.

Threat analysis AI is better thought of as a sophisticated automation process that can assist; it’s nowhere close to general AI – the stuff of dystopian movie plots. We can use AI – in its current form – to automate many important aspects of defense against attackers, such as rapid prototyping of decryption software for ransomware. But we still have to understand how to get the decryption keys in the first place; AI can’t tell us that.
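As an illustration, here is a minimal sketch of the decryptor scaffolding that AI-assisted coding can churn out quickly. Everything about it is a hypothetical assumption – AES-256-CBC, an IV prepended to each file, a “.locked” extension. The one thing the sketch cannot supply is the value of KEY: recovering that still takes human analysis of the ransomware itself.

```python
# Hypothetical ransomware-decryptor scaffolding. The cipher choice
# (AES-256-CBC), file layout (IV || ciphertext), and ".locked"
# extension are illustrative assumptions, not a real family's format.
from pathlib import Path

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Placeholder: the recovered 256-bit key goes here. Finding it is the
# part AI can't do for you; it comes from analyzing the malware.
KEY = bytes(32)

def decrypt_file(path: Path) -> None:
    blob = path.read_bytes()
    iv, ciphertext = blob[:16], blob[16:]  # assumed layout: IV || data
    decryptor = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()  # strip PKCS7 padding
    plaintext = unpadder.update(padded) + unpadder.finalize()
    path.with_suffix("").write_bytes(plaintext)  # drop ".locked"

for locked in Path(".").rglob("*.locked"):
    decrypt_file(locked)
```

The boilerplate above is exactly the sort of thing an AI assistant drafts in seconds; the analysis that determines whether KEY can be recovered at all is where the human expertise still lives.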

Most developers use AI to assist in software development and testing, since those are areas AI can “know” a great deal about, given the vast troves of software examples it can ingest. But we’re a long way off from AI just “doing antimalware” magically – at least, if you want the output to be useful.

It’s still easy to imagine a fictional machine-versus-machine model replacing the entire industry, but that’s just not the case. Automation will certainly get better – possibly every week, if the RSA show floor claims are to be believed. But security will still be hard – really hard – and both sides have simply stepped up the game, not eliminated it.

Do you want to learn more about AI’s power and limitations amid all the hype and hope surrounding the tech? Read this white paper.
