While the positive opportunities of generative artificial intelligence have been discussed and dissected at great length, less attention has been paid to the potential threats posed when AI is used as a tool by bad actors.
Just as the technology is evolving, so too are the types of retail crimes it can power, according to a panel of security leaders at NRF PROTECT in California in June.
What we’ve seen “over the course of the last decade is an increase in automation and connectivity via technology, which is great,” said Katie Craven, Visa’s head of risk and identity solutions management for North America. “But fraud has also become much more automated and connected, and some of the barriers to large-scale attacks like language, location … have been overcome by AI.”
For example, cyber adversaries are using AI to carry out targeted phishing and social engineering attacks and facilitate scams. “AI is helping them,” said Damon Bagley, information security leader for Living Spaces. “It takes the D-players and makes them A-players.”
Some of the newest AI-powered frauds and scams include a tactic known as “SEO poisoning,” Bagley said, in which cybercriminals use AI to push their malicious page links to the top of Google search results. When a potential customer clicks on one of those links, their device may be infected with malware.
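One common defensive response is to watch for lookalike domains, since SEO-poisoning campaigns often impersonate a retailer’s real site. The sketch below shows one minimal way a security team might flag such domains; the allowlist and similarity threshold are illustrative assumptions, not anything described on the panel.

```python
# Illustrative sketch: flag lookalike domains of the kind SEO-poisoning
# campaigns use to impersonate a retailer. The allowlist and threshold
# below are hypothetical examples, not real policy.
from difflib import SequenceMatcher
from urllib.parse import urlparse

LEGIT_DOMAINS = {"livingspaces.com"}  # hypothetical allowlist

def is_lookalike(url: str, threshold: float = 0.8) -> bool:
    """Return True if the URL's domain closely resembles, but does not
    exactly match, a known-legitimate domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    for legit in LEGIT_DOMAINS:
        if domain == legit:
            return False  # exact match: the real site
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return True   # near match: likely typosquat or lookalike
    return False

print(is_lookalike("https://www.livingspaces-sale.com/deals"))  # True
print(is_lookalike("https://www.livingspaces.com/deals"))       # False
```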
Criminals can also use AI to make synthetic audio or video that impersonates a retail CEO or an in-store environment and post it on social media. “All of us have incident response plans, but what happens when a social media post is out there and it’s not yours?” Bagley asked. “All of a sudden there’s a video out there. What’s your response? Is someone looking for it?”
“It’s almost like an existential threat,” Craven said. “We can’t even trust our senses anymore.”
Meanwhile, retail workers in areas such as HR and marketing are eager to use the new technology, which means retail cyber leaders face new responsibilities to ensure the security of AI systems and tools. “At Living Spaces, we really wanted to get ahead of it,” Bagley said. “We thought of ChatGPT as the next Google, so we wanted to figure out the models.”
After some positive initial results with copywriting and product descriptions, Bagley and his team began to worry about what sensitive data from other departments such as accounting or HR might make its way to the internet. “That’s when we said we need to put governance around this. At least, don’t put all this customer information into ChatGPT,” he said.
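The kind of guardrail Bagley describes can start simply. Here is a minimal sketch of a regex-based redaction pass that strips obvious customer data from prompts before they leave the network for an external AI service; the patterns are illustrative assumptions, not a production data loss prevention rule set.

```python
# Minimal sketch of a "don't paste customer data into ChatGPT" guardrail:
# redact obvious PII before a prompt is sent to an external AI service.
# These patterns are illustrative, not a complete DLP rule set.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude card-number match
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before the prompt
    is forwarded to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Refund jane.doe@example.com, card 4111 1111 1111 1111"))
# -> "Refund [EMAIL REDACTED], card [CARD REDACTED]"
```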
It all comes down to protecting the data, said Mark Weatherford, chief cybersecurity strategist at Coalfire. “You can’t have AI without data. It’s statistics plus plus, statistics on steroids,” he said. “A new impetus on us as technologists and security professionals is to look at the data, who has access to it, when and where do they have access to it, and who can they send it to and how.”
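Weatherford’s framing translates naturally into a least-privilege check with an audit trail. The sketch below is a hypothetical illustration, with invented roles and data sets, of recording who accesses which data and when.

```python
# Hypothetical sketch of Weatherford's point: know who can touch which
# data, and record every access decision. Roles and data sets here are
# invented for illustration.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

ACCESS_POLICY = {  # role -> data sets that role may read
    "marketing": {"product_copy"},
    "hr": {"employee_records"},
    "accounting": {"invoices", "payroll"},
}

def check_access(user: str, role: str, dataset: str) -> bool:
    """Allow or deny a read, and log the decision for later audit.
    (We log our own UTC timestamp; basicConfig adds none by default.)"""
    allowed = dataset in ACCESS_POLICY.get(role, set())
    logging.info(
        "%s | user=%s role=%s dataset=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, dataset, allowed,
    )
    return allowed

check_access("dbagley", "marketing", "product_copy")      # allowed
check_access("dbagley", "marketing", "employee_records")  # denied and logged
```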
Government regulators around the world are trying to get their arms around the new and ever-evolving technology. “We get very enamored with gen AI, but one of the big transformational changes right now is legislative,” Weatherford said. “While AI is sexy, the policy and governance piece of it is equally important to what we’re seeing on the technology side.”
In the U.S., President Biden issued an executive order on the safe and secure use of AI in October. The European Union recently passed the AI Act, the world's first comprehensive AI law. “It’s not like governments are just standing back and waiting for it to happen, but the technology is just moving so fast,” Weatherford said.
Meanwhile, retail security leaders are gearing up to deal with the evolving policy landscape as well as a new iteration of AI-powered cybercrimes. “There is no silver bullet in fraud fighting. It’s always going to be a layered approach,” Craven said.
“Make sure you are on top of emerging tech and scams, so you have awareness of it. And the other piece is education of both employees and customers. We are the weakest link.”