In its recent cyberthreats report, Acronis predicts that artificial intelligence (AI) and machine learning (ML) will help fuel identity fraud and disinformation campaigns in the not-so-distant future.
Christopher Hills, Chief Security Strategist at BeyondTrust, believes that while AI isn’t yet capable enough to learn and replicate human behavior, recent advancements have put it to interesting uses, such as correctly predicting medical conditions based on symptoms. “Granted, this is a good thing, but in the hands of a threat actor, AI could easily be leveraged for nefarious purposes,” warns Hills.
Greg Hatcher, CEO at White Knight Labs, agrees, saying that the abuse of AI by threat actors for nefarious purposes is a growing concern, especially since the advent of ChatGPT.
AI-powered attacks
According to our experts, one of the leading ways threat actors abuse AI is to craft personalized phishing emails that mimic the writing style of the target’s acquaintances, making them more convincing and difficult to detect.
Phishing is already a big problem in the region. According to Acronis, Kuwait led the top 10 EMEA countries with the most blocked URLs in November 2022, followed by Saudi Arabia, with Jordan in 8th place and the UAE in 10th.
Aviv Grafi, CTO and Founder of Votiro, shares that while phishing has long been a favorite vehicle for threat actors to inject malware into a company’s network, in recent years their success rate has fallen thanks to a combination of phishing-awareness training and antivirus or sandboxing technologies that block potential malware attempts.
“Now, with the introduction of AI, hackers are at an advantage. Using AI chatbots like ChatGPT, bad actors can more easily create convincing, and grammatically correct, language for phishing attacks,” says Grafi.
Building on this, Hatcher says AI is incredibly useful for non-English speakers who need to craft a phishing email in a certain tone of voice: “The age of phishing emails that have misspelled words and incorrect grammar is over. Goodbye, King of Nigeria.”
Our experts also expect the next generation of social engineering attacks to be AI-driven, with the technology being misused to create convincing audio and video deepfakes.
Iu Ayala Portella, CEO and founder of Gradient Insight, believes that, going forward, attackers will use AI to create synthetic identities that can be used to conduct fraudulent activities, spread disinformation, and manipulate public opinion.
“For instance, attackers can use AI-powered voice cloning tools to create realistic audio impersonations of individuals, such as politicians, business leaders, or celebrities, to spread fake news or even influence election results. Similarly, AI-powered image manipulation tools can create convincing fake videos or images of people that can be used to blackmail, extort, or deceive individuals,” shares Portella.
Onslaught of deepfakes
Theo Nasser, Co-Founder & CEO at Right-Hand Cybersecurity, says that with deepfakes, attackers will be able to create convincing phishing emails and texts, resulting in complex threats that, to the untrained eye, are extremely realistic and believable.
Hatcher points to an incredibly successful deepfake scam from 2020, in which a bank manager in Hong Kong was tricked into transferring $35 million to an account the criminals controlled. The criminals used ‘deep voice’ technology to spoof the voice of a director at a company the bank manager was familiar with. They knew the company was about to make an acquisition and would need to initiate a wire transfer to fund the purchase. The criminals timed the attack perfectly, and the bank manager transferred the funds, narrates Hatcher.
In addition to supercharging traditional attack vectors, Grafi says, AI tools can also help novice hackers craft malicious code or malware that they could not have created on their own.
“They may not necessarily be successful as the malware it generates is pretty basic, but it will allow them to play around with the technology and try to advance their skills,” explains Grafi.
Malware, too, is a big concern in the region: Jordan emerged as one of the most-attacked countries in terms of malware per user in Q3 2022, as per Acronis.
Compounding the problem, says Grafi, is the fact that AI also enables threat actors to generate malicious images and videos with malware embedded in them. This is dangerous because the antivirus and sandboxing technologies most companies rely on are not well equipped to detect threats hidden inside images and videos.
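To make the point concrete, here is a minimal, hypothetical sketch, not any vendor’s actual method, of one simple check a content-sanitization pipeline might run: flagging PNG images that carry extra bytes after the final IEND chunk, one common place to smuggle a payload inside an otherwise valid image. Real embedding techniques are far more varied, which is exactly why Grafi argues signature-based tools struggle here.

```python
# Hypothetical sketch: flag PNG files that carry extra bytes after the
# IEND chunk -- one simple way a payload can ride inside a valid image.
# Illustrative only; production scanners check far more than this.

def png_has_trailing_data(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # A well-formed PNG ends with the IEND chunk: a 4-byte length (zero),
    # the 4-byte type b"IEND", and a 4-byte CRC. Any bytes after that CRC
    # are not part of the image and deserve a closer look.
    iend = data.rfind(b"IEND")
    if iend == -1:
        return False  # not a well-formed PNG; out of scope for this sketch
    end_of_image = iend + 4 + 4  # IEND type bytes + trailing CRC
    return len(data) > end_of_image

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        verdict = "suspicious: trailing data" if png_has_trailing_data(p) else "no trailing data"
        print(f"{p}: {verdict}")
```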
But all’s not lost
Andrew Robinson, CSO and Co-Founder at 6clicks, agrees that, like any new technology, AI and ML will have vulnerabilities that get exploited, and that these nascent technologies lower the barrier to entry for attackers in the same way they lower it for general software development.
However, despite the opportunity for misuse, he is confident that AI and ML will finally help tip the scales in favor of the defenders.
“I am optimistic that on balance AI and ML technologies offer the defender the ability to detect and respond faster than ever before and gain a considerable advantage over attackers — once the technologies mature — in what has previously been considered a never-ending arms race,” reasons Robinson.