AI and machine learning helped Visa combat $40 billion in fraud activity


Payments giant Visa is using artificial intelligence and machine learning to counter fraud, James Mirfin, global head of risk and identity solutions at Visa, told CNBC.

The company prevented $40 billion in fraudulent activity from October 2022 to September 2023, nearly double the figure from a year earlier.

Fraudulent tactics that scammers employ include using AI to generate primary account numbers and repeatedly test them, said Mirfin of Visa. The PAN is a card identifier, usually 16 digits but up to 19 digits in some instances, found on payment cards.

Using AI bots, criminals repeatedly submit online transactions with different combinations of primary account numbers, card verification values (CVVs) and expiration dates until they get an approval response.

This method, known as an enumeration attack, leads to $1.1 billion in fraud losses annually, comprising a significant share of overall global losses due to fraud, according to Visa.
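Visa has not published its detection logic, but the enumeration pattern described above has a recognizable signature from the network's side: a single source trying many distinct card numbers with almost no approvals. The sketch below is a hypothetical illustration of that heuristic; the field names and thresholds are invented for the example.

```python
from collections import defaultdict

def flag_enumeration(attempts, min_distinct_pans=10, max_approval_rate=0.05):
    """Flag sources whose attempts look like an enumeration attack.

    attempts: list of dicts with 'source', 'pan', and 'approved' keys
    (a simplified stand-in for real authorization-attempt records).
    """
    by_source = defaultdict(list)
    for a in attempts:
        by_source[a["source"]].append(a)

    flagged = []
    for source, tries in by_source.items():
        distinct_pans = {t["pan"] for t in tries}
        approval_rate = sum(t["approved"] for t in tries) / len(tries)
        # Many distinct card numbers tried with almost no approvals is
        # the classic enumeration signature described in the article.
        if len(distinct_pans) >= min_distinct_pans and approval_rate <= max_approval_rate:
            flagged.append(source)
    return flagged
```

A legitimate shopper retrying one card a few times would not trip this rule; a bot cycling through generated PANs would.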

“We look at over 500 different attributes around [each] transaction, we score that and we create a score – that’s an AI model that will actually do that. We do about 300 billion transactions a year,” Mirfin told CNBC.

Each transaction is assigned a real-time risk score that helps detect and prevent enumeration attacks in card-not-present transactions – those in which a purchase is processed remotely, without a physical card being presented to a reader or terminal.
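Visa does not disclose the model itself, but a real-time score of this kind can be framed as a function over transaction attributes mapped to a probability between 0 and 1. The toy sketch below uses a logistic score over a handful of invented feature names and weights – purely illustrative, not Visa's actual attributes or model.

```python
import math

# Hypothetical feature weights for illustration only; a production model
# would learn weights over hundreds of attributes, not four.
WEIGHTS = {
    "is_card_not_present": 1.2,
    "attempts_from_ip_last_hour": 0.15,
    "cvv_mismatch": 2.0,
    "amount_zscore": 0.4,
}
BIAS = -4.0

def risk_score(features):
    """Return a 0-1 risk score; the issuer chooses a decline threshold."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

An issuer consuming such a score would decline or step up authentication above some threshold, which matches Mirfin's description of customers deciding not to approve high-risk transactions.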

“Every single one of those [transactions] has been processed by AI. It’s looking at a range of different attributes and we’re evaluating every single transaction,” Mirfin said.

“So if you see a new type of fraud happening, our model will see that, it will catch it, it will score those transactions as high risk and then our customers can decide not to approve those transactions.”

Using AI, Visa also rates the likelihood of fraud for token provisioning requests – to take on fraudsters who leverage social engineering and other scams to illegally provision tokens and perform fraudulent transactions.

In the last five years, the firm has invested $10 billion in technology that helps reduce fraud and increase network security.

Generative AI-enabled fraud

Cybercriminals are turning to generative AI and other emerging technologies including voice cloning and deepfakes to scam people, Mirfin warned.

“Romance scams, investment scams, pig butchering – they are all using AI,” he said.

Pig butchering refers to a scam tactic in which criminals build relationships with victims before convincing them to put their money into fake cryptocurrency trading or investment platforms.

“If you think about what they’re doing, it’s not a criminal sitting in a market picking up a phone and calling someone. They’re using some level of artificial intelligence, whether it’s a voice cloning, whether it’s a deepfake, whether it’s social engineering. They’re using artificial intelligence to enact different types of that,” Mirfin said.

Generative AI tools such as ChatGPT enable scammers to produce more convincing phishing messages to dupe people.

Cybercriminals using generative AI need less than three seconds of audio to clone a voice, according to U.S.-based identity and access management company Okta. That cloned voice can then be used to trick family members into thinking a loved one is in trouble, or to trick bank employees into transferring funds out of a victim’s account.

Generative AI tools have also been exploited to create celebrity deepfakes to deceive fans, said Okta.

“With the use of Generative AI and other emerging technologies, scams are more convincing than ever, leading to unprecedented losses for consumers,” Paul Fabara, chief risk and client services officer at Visa, said in the firm’s biannual threats report.

Generative AI lets cybercriminals commit fraud far more cheaply, targeting multiple victims at once with the same or fewer resources, Deloitte’s Center for Financial Services said in a report.

“Incidents like this will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers,” the report said, estimating that generative AI could increase fraud losses to $40 billion in the U.S. by 2027, from $12.3 billion in 2023.

Earlier this year, an employee at a Hong Kong-based firm sent $25 million to fraudsters who had used a deepfake of the company’s chief financial officer to instruct him to make the transfer.

Chinese state media reported a similar case in Shanxi province this year where an employee was duped into transferring 1.86 million yuan ($262,000) to a fraudster who used a deepfake of her boss in a video call.
