The rise of ChatGPT, the AI-powered chatbot developed by OpenAI, has attracted enormous attention. The chatbot can answer questions on almost any topic, but its popularity also carries risks. Cybercriminals are already creating fake versions of the official site and app to distribute malicious content. The more serious danger, however, is spear phishing: targeted attacks that exploit the personal information individuals share on social media and elsewhere online.
Spear phishing is a growing threat, and ChatGPT lowers the bar: attackers can feed the details people unknowingly disclose through their online activity into the chatbot to generate convincing, personalized lures at scale.
To counter this trend, Ermes – Cybersecurity, an Italian cybersecurity firm, has developed an AI-based system that filters outbound traffic and blocks the sharing of sensitive information such as email addresses, passwords, and financial data.
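Ermes' actual technology is proprietary, but the basic idea behind this kind of outbound filtering can be sketched in a few lines. The Python example below is a simplified, hypothetical illustration (the regex patterns, function names, and Luhn check are assumptions for the sketch, not Ermes' implementation): it scans text before it leaves the user's machine and flags prompts that appear to contain email addresses, passwords, or payment card numbers.

```python
import re

# Illustrative patterns only; a production filter would use far richer
# detection (ML classifiers, context awareness, entropy checks, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password_hint": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan_outbound(text: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound prompt."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "card" and not luhn_valid(match.group()):
                continue  # digit run that fails the checksum is not a card
            hits.append(label)
            break  # one hit per category is enough to block
    return hits

prompt = "My password: hunter2, card 4111 1111 1111 1111, mail me at a@b.co"
blocked = scan_outbound(prompt)
if blocked:
    print(f"Blocked prompt; detected: {', '.join(blocked)}")
```

Even this naive pass shows where such a filter intervenes: before the prompt ever reaches the chatbot, rather than after the data has already been shared.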
One specific concern is the use of ChatGPT for Business Email Compromise (BEC) attacks. Traditionally, cybercriminals have relied on reusable templates to craft deceptive emails that trick recipients into sharing sensitive information, and that reuse is precisely what makes them catchable: once one copy is reported, near-identical siblings can be flagged automatically. With ChatGPT's assistance, hackers can instead generate unique wording for every email, defeating this kind of signature matching and making the messages far harder to distinguish from legitimate correspondence.
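The defensive implication is easy to demonstrate. The toy sketch below (the sample messages are invented for illustration) compares two template-stamped emails with two uniquely worded ones using a simple near-duplicate check: the template pair scores high and is trivial to flag once one copy is known, while the unique pair stays low and evades any similarity-based filter.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; near-duplicate filters flag high scores."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Two messages stamped from the same template differ only in the filled
# slots, so a near-duplicate check catches the second once the first is
# reported.
template_a = "Dear Alice, please wire $9,400 to account 2231 by Friday."
template_b = "Dear Bob, please wire $8,750 to account 9912 by Friday."

# LLM-assisted messages can convey the same lure with disjoint wording,
# so pairwise similarity stays low and signature matching fails.
unique_a = "Hi Alice, the vendor invoice is overdue; can you settle it today?"
unique_b = "Bob, finance flagged an unpaid supplier bill that needs action."

print(f"template pair: {similarity(template_a, template_b):.2f}")  # high
print(f"unique pair:   {similarity(unique_a, unique_b):.2f}")      # low
```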
ChatGPT's flexibility also lets attackers vary their prompts endlessly, increasing the odds that at least some messages slip past defenses. This raises serious concerns about the misuse of advanced AI technologies for cybercrime, and both users and organizations need to remain vigilant against these evolving threats.