Social engineering scams existed long before the internet but took on a new life once people started spending more time online. Phishing is easily the oldest scam on the internet; virtually every online user has received some form of phishing at one point or another in their digital lives. As technology has evolved, phishing has evolved with it, becoming ever more sophisticated to get around security controls. As users grew more tech-savvy, attackers moved to more elaborate attacks and to platforms beyond email, such as mobile-based phishing, Discord scams, and social media messages.
A more recent, and more dangerous, evolution of social engineering is the AI-based scam, which leverages this technology to create sophisticated, realistic lures that can fool even the most tech-savvy users.
In this article, we go over this trend and the key tactics attackers use.
How AI has impacted Social Engineering
AI has been a massively disruptive force across nearly every sector, and cybercrime is no different. Cybercriminals have quickly recognized the raw potential of using AI in their schemes. Tell-tale signs of phishing, such as typos and grammatical mistakes, can easily be avoided using AI, and specially crafted emails can be generated quickly with the proper tool.
One of the most dangerous applications of AI for scamming and fraud has been the use of deepfakes. This AI-driven technology allows an attacker to superimpose an existing person’s voice or likeness onto other audio or video, enabling a new and dangerous type of identity fraud. With machine learning algorithms powering deepfakes, it can be extremely difficult to tell what is real and what is fake in these scams.
Platforms like YouTube are already filled with deepfake videos of famous personalities in which it is nearly impossible to tell the fake from legitimate footage. This is a goldmine for attackers, who can apply the same technology for malicious purposes.
For example, attackers could use this technology to impersonate a senior executive and commit financial fraud: by using the executive’s cloned voice to instruct a junior employee, fraudulent transfers could be carried out with no one the wiser. Similarly, attackers could substitute a legitimate employee’s face and voice for their own to access sensitive information. This is especially dangerous in the era of remote work, where an employee and hiring manager may not meet in person for months at a time!
What makes deepfakes so dangerous
Deepfakes blur the line between reality and illusion, making them especially dangerous in social engineering situations. Cybersecurity teams educate users to look for the tell-tale signs of a social engineering attempt, but if the scam appears to come from a trusted individual, it becomes extremely difficult to ascertain whether it is fake or real. Similarly, security products rely on detecting malicious patterns, and attacks aimed at human perception will quickly fly under the radar of such tools.
Deepfake technology is also becoming increasingly accessible to the average user, putting it in the hands of cybercriminals across the globe. Unfortunately, this threat is not just theoretical: several attacks have already occurred, showing the growing popularity of this threat vector. Criminals have started using deepfake scams in tandem with stolen identity documents to pass job interviews and get hired at companies, gaining access to sensitive information. The risk was severe enough for the FBI to issue an advisory warning companies about this new threat, stating, “The FBI Internet Crime Complaint Center (IC3) warns of an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.”
How to protect against such scams
Deepfake scams are here to stay, and cybersecurity teams must upgrade their training programs to cover this new threat. Users and senior executives alike should be trained to recognize these scams and to identify suspicious requests, even when they appear to come from a genuine source.
In addition to awareness, organizations should invest in AI-based security solutions that can identify deepfake content from tell-tale patterns in audio and video streams. Such solutions will become a standard part of any cybersecurity framework as the industry matures and these attacks become more and more common.
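To make the idea of pattern-based detection concrete, here is a minimal sketch of the kind of signal such tools can start from: deepfake pipelines blend a synthesized face into each frame, which can leave localized blur, so a sudden drop in frame sharpness is one (weak) indicator. The heuristic, the drop_ratio threshold, and the file name below are all illustrative assumptions; production detectors rely on trained deep-learning models, not a single statistic.

```python
import cv2

def frame_sharpness_scores(video_path: str, max_frames: int = 300):
    """Compute a naive per-frame sharpness score (variance of the Laplacian).

    Low values indicate blurry frames; a sharp drop between neighboring
    frames is one weak artifact signal that detectors can build on.
    """
    cap = cv2.VideoCapture(video_path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return scores

def flag_suspicious_frames(scores, drop_ratio: float = 0.5):
    """Flag frame indices whose sharpness falls sharply versus the prior frame.

    drop_ratio is a hypothetical tuning parameter, not an industry standard.
    """
    return [i for i in range(1, len(scores))
            if scores[i] < scores[i - 1] * drop_ratio]

if __name__ == "__main__":
    scores = frame_sharpness_scores("interview_clip.mp4")  # hypothetical file
    print("Frames with sudden sharpness drops:", flag_suspicious_frames(scores))
```

A single statistic like this produces many false positives (ordinary motion blur, compression), which is precisely why commercial tools layer multiple audio and video signals behind a trained model.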
Conclusion
The new age of technology has dramatically increased the sophistication of social engineering attacks, and combating them requires new types of cybersecurity controls. Deepfakes are not a new technology; however, their growing accessibility has turned them from a harmless social media pastime into a dangerous new cybercrime tool.
The days of standard email-based phishing attacks are far behind us as we enter a new era of social engineering scams powered by AI tools. Old detection methods will become obsolete as cybercriminals shift from text-based lures to attacks on how we perceive other people. Cybersecurity teams must understand these new risks before their companies are targeted and implement a holistic cybersecurity strategy that combines technical controls with awareness.
FREQUENTLY ASKED QUESTIONS
What are some examples of deepfake scams?
Real-time deepfakes have been used to trick grandparents into sending money to simulated relatives, to secure jobs at tech companies to gain inside information, and to deceive individuals into parting with large sums of money. A recent scam highlighted by the FBI involved the use of deepfake videos during job interviews for tech positions, with the scammers misrepresenting themselves as applicants for remote jobs.
What was the FBI’s response to deepfake scams?
The FBI issued a warning in response to an increase in complaints about the use of deepfake videos during job interviews, particularly for tech jobs involving access to sensitive systems and information. The Bureau reported that the scam had been attempted against positions in development, database, and other software-related functions; some required access to customers’ personal information, financial data, large databases, and/or proprietary information.
How prepared is society to handle the threat of deepfake scams?
Despite the emerging tools to detect deepfakes, society is not fully prepared to handle this threat. These tools are not always effective and may not be accessible to everyone. The sophistication of deepfake technology, combined with the difficulty of detection, highlights the need for further research and development in effective countermeasures to combat these sophisticated scams.
What are deepfakes, and how do they relate to AI-based scams?
Deepfakes are simulations powered by deep learning, a form of AI that uses vast amounts of data to replicate something human, such as a conversation or a likeness. In the context of AI-based scams, deepfakes can be used in real time to replicate someone’s voice, image, and movements in a call or virtual meeting, thereby deceiving victims into thinking they are interacting with a real person.