Cybercriminals Are Using AI in Scams. Organizations Need to Prepare for New AI Cybersecurity Threats
Cybercriminals are using AI to make their scams more convincing and sophisticated than ever. The result is a slew of new AI cybersecurity threats to both businesses and consumers.
In January, Forbes reported that several major hacking communities were using ChatGPT to develop malware that could spy on users’ keyboards. Fraudsters are also using AI tools to simulate voices in imposter scams; NBC News reported an incident in March in which a man received a phone call that appeared to be from his daughter, who said she was being held hostage. The daughter’s voice was actually an AI-generated replica.
While ChatGPT and other AI tools do pose cybersecurity risks for businesses and consumers, they also offer many benefits, including increased efficiency, improved data monitoring, and reduced human error. Companies see the potential and are exploring how to integrate these tools into their business practices while balancing the technology’s benefits and risks.
As cybercriminals use AI to enhance their scams, organizations want to know how to prepare for bad actors and the potential misuse of AI tools. Doriel Abrahams, U.S. Head of Risk at Forter, shares his thoughts on the AI cybersecurity trends likely to play out this year.
Doriel’s Thoughts
“With the rise of ChatGPT and its new iteration built on GPT-4, there are a lot of interesting and important conversations to be had surrounding generative AI. One of them is about the potential use of these tools in fraud. The reality is that online crime has one weakest link, and that link is human intervention.
ChatGPT, or other tools, can manipulate and prey on human emotion and can become part of an ongoing fraud war. It can be used to make scams more convincing. Think about romance scams, where people try to befriend you. Or business email compromise scams, where people send emails to finance teams, trying to get them to change the bank accounts used to pay vendors. How do you convince someone to reroute funds to another account? The power of AI can make those emails look a lot more authentic.
In the past holiday season we already saw a huge manipulation problem, and those scams were very scalable. Just thinking about the ability to use these tools to get even more scale, it could be a big thing and a game changer for the online crime community.
These tools have already been used to create all sorts of materials and broadcasts, and it’s likely going to get even worse. They can significantly impact the crime-as-a-service industry: all the things being sold on the dark web. You can buy data, or buy access to data, obtained through simple AI-driven manipulation of people. It can be used to get people’s addresses, payment information, and Social Security numbers. Potentially, if the scam is good enough, you might unknowingly expose your entire identity and private details to those fraudsters.
[Generative AI] has so much potential to help us evolve, and we’re seeing all sorts of cool new things day to day that these tools can be applied to. But when you think about it, there’s a lot of fraud you can commit with those same tools.
Organizations need to be prepared, because it is coming. You know what happens when expert criminals start exploring the ways they can use generative AI. Are there reasons to panic? Maybe, and maybe not. Most important is that we know what these tools are capable of, and that we think differently when we encounter things that might be scams. It’s a big deal, for better but also for worse, so we just need to be ready.”