ChatGPT is, at its core, an AI language model that generates convincing text that is difficult to distinguish from text written by humans. According to Kaspersky, cybercriminals are already trying to apply this capability to spear-phishing attacks.
The cybersecurity firm is examining how the arrival of ChatGPT in the hands of the general public could change the established rules of the cybersecurity world. Its analysis comes a few months after OpenAI released ChatGPT, one of the most powerful AI models to date.
Previously, the main hurdle stopping attackers from running mass spear-phishing campaigns was the cost of writing each targeted email by hand. ChatGPT is set to drastically alter that balance of power, because it may allow attackers to generate persuasive, personalized phishing emails on an industrial scale. It can even mimic a given writing style, producing convincing fake emails that appear to come from one employee to another. Unfortunately, this means the number of successful phishing attacks may grow.
Many users have already found that ChatGPT is capable of generating code, and unfortunately that includes malicious code. Kaspersky observed that creating a simple info stealer could become possible without any programming skills at all.
However, law-abiding users have little to fear. If bot-written code is actually deployed, security solutions will detect and neutralize it just as quickly as they do malware written by humans.
While some analysts voice concern that ChatGPT could even create unique malware for each individual victim, such samples would still exhibit malicious behavior that a security solution would most likely notice. What's more, bot-written malware tends to contain subtle errors and logical flaws, which means that fully automated malware development has yet to be achieved.