Coming Soon: Attacks against AI

Tricking AI with manipulated data

Businesses must brace for attacks against AI

Artificial Intelligence (AI) is increasingly becoming an asset in the repertoire of threat actors, who aren’t averse to using tools like ChatGPT to pull off social engineering scams and to craft malware.

If that wasn’t troubling enough, Acronis, in its recent cyberthreats report, noted that one of the worrisome trends set to dominate headlines in 2023 and beyond is attacks against AI itself.

Aviv Grafi, CTO and Founder of Votiro, says providers of AI tools are watching and taking note when their tech is used for malicious purposes, and are doing their best to curb these attack vectors. For instance, he says that although you can’t ask ChatGPT outright to write you a phishing email, it can be made to do so if you change how you word your request.


“It’s a form of deception to get the technology to work in their favor,” says Grafi, adding “No matter the guardrails that are put in place by AI providers, hackers will continue to find ways to manipulate data and mislead AI models.”

Poisoning input data

Security experts all agree that the primary threat against AI is bad actors manipulating AI models with modified input data. For instance, by injecting biased or manipulated data sets into the training process, attackers can alter the model’s output to serve their malicious purposes.
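
To make the mechanism concrete, here is a minimal, hypothetical sketch of this kind of data poisoning: a label-flipping attack against a simple classifier, using synthetic data and scikit-learn. The dataset, model, and 30% flip rate are illustrative assumptions rather than a reproduction of any real incident; the point is simply that tampering with a fraction of the training labels measurably degrades the model’s output.

```python
# Minimal sketch of training-data poisoning via label flipping (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Build a clean, synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# Poisoned copy: an attacker with access to the training pipeline
# flips the labels of 30% of the training samples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

# Same model, same features, tampered labels: accuracy on clean test data drops.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

In practice, attackers may use far subtler manipulations than wholesale label flipping, but the sketch captures why tainted training data undermines whatever decisions the model is trusted to make.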

Theo Nasser, Co-Founder & CEO at Right-Hand Cybersecurity, says his company takes these threats against AI models very seriously.

He stresses that issues such as data tampering and model poisoning pose threats to the many industries that rely heavily on the information produced by these models. When bias is injected into data sets, it can skew important research and create a false narrative.

“This creates a significant loss of information that can be devastating. It is important to be aware of these threats and protect these systems, and ourselves, to the best of our ability,” stresses Nasser.


Maria Chamberlain, Owner of Acuity Total Solutions, agrees, saying that a biased dataset can not only undermine the original intent of a machine learning (ML) model, but also fool its algorithms into misjudging a situation entirely, defeating the model’s purpose.

Time to take charge

Our experts all view the threats against AI and ML models as severe and growing, not least because, as AI becomes more prevalent in daily life, a successful attack can have far-reaching consequences.

“The AI / ML vendors need to understand that they’re now a primary target for adversaries big and small. With businesses flocking to platforms like ChatGPT, the attackers see opportunity,” says Craig Burland, CISO at Inversion6.

Burland suggests that providers like OpenAI should treat cybersecurity as their foremost priority, especially since nothing would undermine an AI platform like news of an information breach or the distortion of its underlying model.


Andrew Robinson, CSO and Co-Founder at 6clicks, also believes that threats against AI and ML models are entirely real, as they are for any system, especially new technologies that have not yet undergone extensive security testing and cycles of improvement.

The seriousness of the threat, Robinson believes, boils down to the impact a particular model has.

“If the model is responsible for product purchasing suggestions then the impact is minimal. As models become responsible for the movement of objects in the physical world (things like cars, trucks, buses, and planes), then the seriousness is magnified by several orders of magnitude,” sums up Robinson.
