There’s a lot of fear-mongering about AI and how it could supercharge cyberattacks like never before. In a recent speech, US Vice President Kamala Harris went so far as to refer to AI as an existential threat.
But is the AI threat as menacing as it is often made out to be, or is it just overhyped buzz?
Christian Borst, CTO EMEA at Vectra AI, acknowledges that we face several existential threats, such as climate change. But he’s not sure AI is one of them.
“I would argue that cognitive biases are influencing our judgment one way or the other,” says Borst, “which is why I believe we urgently need to gain a broader knowledge of AI; its opportunities and risks.”
Hesitant assistant
So what opportunities does AI provide threat actors?
Borst says that cybercrime, for the most part, is a money-making business. Bad actors will use anything that helps increase their revenue.
“And in the past, we have seen that often they are the early adopters of technology, because they don’t have to overcome complex organizational boundaries, and are usually less risk averse than their targets,” he explains.
Importantly, he says that AI reduces the barriers to entry, not just for legitimate users and developers but also for illegitimate ones. And generative AI can significantly increase not only the quality but also the scalability of social engineering attacks.
Using AI to craft such attacks relieves threat actors of the need to build skills in language, programming, and persuasion, says Borst.
“While most AI will not write a computer virus when explicitly asked, generative AI systems will absolutely write software that can be used as components in a malicious attack,” says Morey Haber, Chief Security Officer at BeyondTrust.
For instance, ChatGPT won’t write a virus. But it’ll easily create a basic PowerShell script to disable all mailboxes on a Microsoft Exchange Server.
“The subtle difference between asking for a virus and a potentially useful script that can be abused will allow threat actors to rapidly develop payloads with a high degree of accuracy to conduct their nefarious attacks,” says Haber.
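The PowerShell example Haber alludes to illustrates the point. The snippet below is a hypothetical sketch of such a script, using the real `Get-Mailbox` and `Disable-Mailbox` cmdlets from Microsoft’s Exchange Management Shell (the exact script a chatbot would produce will vary). It is a legitimate administrative command, which is precisely why a generative model will produce it without objection, and precisely why it is dangerous in the wrong hands:

```powershell
# Hypothetical sketch -- requires the Exchange Management Shell and admin rights.
# Disables every mailbox on the server; benign for an administrator,
# disruptive if run by an attacker who has gained that access.
Get-Mailbox -ResultSize Unlimited | Disable-Mailbox -Confirm:$false
```

Nothing in the script is itself malicious; the harm lies entirely in who runs it and why, which is the “subtle difference” Haber describes.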
Supercharged deceptions
A genuine AI threat is the use of the technology to amplify the scale and complexity of cyberattacks.
“[Threat actors] use AI’s capabilities to automate and refine their strategies, crafting convincing phishing emails, sophisticated deep fake videos, and stealthy malware that sneaks past typical defenses,” explains Andrea Marazzi, CCO of New Native.
This approach, he explains, lets attackers adapt their methods easily, making these evolving cyber threats harder to identify and mitigate.
Bundeep Rangar, CEO of Fineqia, agrees. He says the real danger is the increased speed of cyberattacks thanks to AI-driven precision and automation.
“The use of AI, particularly Large Language Models (LLMs), poses a new danger by making it easier to create convincing and deceptive content,” says Rangar. “It enables the creation of highly personalized [attacks], which can confuse people and [further] blur the lines between what is real and what is manipulated.”
Good guy AI
However, threat actors aren’t the only ones using AI. Borst says a wide range of AI techniques are applied in the domain of cybersecurity as well.
Marazzi adds that one popular use is to employ AI to proactively identify network irregularities and predict potential threats.
“AI is successfully being incorporated into cyber security tools to provide a level of recommendations and decisions never seen in previous solutions,” says Haber.
He illustrates this with the example of a leading Cloud-Native Application Protection Platform (CNAPP) vendor, which has added generative AI recommendations for common security weaknesses, such as an open S3 bucket. By following these recommendations, users can fix the flaw without hand-coding a response or rule.
Rangar adds that cybersecurity providers such as Darktrace and CrowdStrike use AI to strengthen their defense mechanisms. They use predictive algorithms to analyze network behavior and detect anomalies, helping identify potential threats quickly.
“Furthermore, AI-driven autonomous systems can adapt to changing attack patterns in real time, providing robust protection to individuals and organizations against emerging cyber threats,” assures Rangar.
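The anomaly-detection idea the vendors describe can be sketched in miniature. The toy example below is not how any of these products work internally; it is a minimal statistical illustration of the general principle: learn a baseline of “normal” network behavior, then flag observations that deviate sharply from it.

```python
# Toy sketch of baseline-vs-anomaly detection on network traffic.
# Real products use far richer models; this shows only the core idea.
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (a simple z-score detector)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) / sigma > threshold]

# Requests per minute during a quiet week (the learned baseline)...
normal_traffic = [120, 115, 130, 125, 118, 122, 128, 119, 124, 121]
# ...versus today's traffic, which contains a sudden spike.
today = [123, 119, 5000, 126]

print(find_anomalies(normal_traffic, today))  # only the 5000-req spike is flagged
```

Production systems replace the z-score with learned models and feed in many signals at once (ports, destinations, timing), but the baseline-then-deviation pattern is the same.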