
Fraud prevention – protecting yourself from AI-powered fraud

Posted on October 15th, 2024.

Your safety and the security of your money are our highest priority, and we implement stringent safeguarding measures to protect all our customers. However, it’s also important for you to take steps to shield yourself from fraud.

To assist you in identifying potential scams and staying secure, we are publishing a series of articles focused on fraud protection. This article specifically addresses AI-powered fraud.

What is AI-powered fraud?

AI fraud occurs when scammers use artificial intelligence, including tools like ChatGPT, to trick people into giving up money or personal data. These scams often involve highly convincing phishing emails, text messages, or social media interactions. Advances in technology have made it possible for scammers to clone voices or create ‘deepfake’ videos and images.

Scammers leverage these technologies to craft convincing messages from trusted sources, such as banks or employers, or even mimic loved ones, persuading victims to send money or click on malicious links, which can lead to significant financial or security risks.

An example of AI-powered fraud

Sarah had recently helped cover the cost of driving lessons for her daughter, Emma. When Emma finally passed her test, Sarah proudly posted a photo of her with her driving licence on Facebook.

A few weeks later, Sarah received a phone call from someone claiming to be Emma, saying she had been in a car accident and urgently needed money for repairs. Distraught, Sarah immediately sent funds using the garage details provided over the phone.

Later that day, when Sarah saw Emma and her car in perfect condition, she was confused.

It turned out Emma had never made that call. The scammer had used an AI tool to clone Emma’s voice and had gathered information from Sarah’s social media posts, manipulating her into sending the money.

Five tips on how to protect yourself

1. Verify unrecognised senders

Phishing emails and fake texts or phone calls are becoming more sophisticated with AI technology. These messages can appear highly realistic, mimicking legitimate organisations. Even though the grammar and spelling mistakes that often betray a scam might no longer be present, it’s still essential to approach any unsolicited communication with caution.

Always verify the sender before responding. If an email appears to be from a known institution, like a bank, call them directly using an official number to confirm the legitimacy of the message.

2. Be wary of unexpected links and attachments

Scammers are increasingly using AI to send harmful links and attachments. Clicking these links can install viruses or malware that allow scammers to access your personal information.

If you receive an unexpected link or attachment, think twice before opening it. Ensure the sender is genuine, and if in doubt, avoid clicking on it altogether.

3. Beware of urgent and unusual requests

In ‘family scams’ like Sarah’s, urgency is often used to manipulate victims. Scammers exploit the trust between loved ones, prompting victims to act quickly without thinking things through.

Watch for any unusual or urgent requests, particularly if you’re asked to send money using unfamiliar methods or to an account that isn’t associated with the person you’re supposedly helping. If something feels off, hang up and call the person directly using a verified number.

4. Secure your devices and accounts

A strong password or passphrase that mixes numbers, letters, and special characters can offer a basic but effective defence against AI-driven scams.

Additionally, enable multi-factor authentication (MFA) for extra security. MFA requires you to provide multiple forms of identification, making it more difficult for scammers to access your accounts.

5. Keep your software updated

Outdated software is vulnerable to attacks, and scammers using AI can identify these weaknesses. Ensure your devices’ software is always up to date to minimise the risk of a cyberattack.

If someone targets you

If you suspect someone has gained access to your online accounts, personal data, or financial information, report it immediately.

If you believe you are a victim of fraud, you should inform the police and use the Action Fraud online reporting tool.

If you receive a suspicious email, you can report it to the Suspicious Email Reporting Service at report@phishing.gov.uk, and if you receive a concerning text message, forward it to 7726.

More information on AI fraud

As AI technology continues to evolve, these types of scams will likely become more prevalent. Staying informed is your best defence. For more information on how to detect and protect yourself from AI-powered fraud, consult additional resources such as Finra’s guidance on avoiding AI fraudsters.

The CIFAS and Action Fraud websites also offer plenty of useful resources and guidance.

Another excellent resource is Stop ID Fraud, which is dedicated to identity fraud and theft, while Take Five is a national anti-fraud campaign run by the trade association UK Finance.

If you’ve been a victim of identity fraud, Victim Support can offer assistance, information, and advice.

Finally, if you’re worried that your TorFX account may be at risk, contact us as soon as possible and we’ll be happy to help. You can also download our app, or use our online platform, to keep an eye on your transfers.

© TorFX. Unauthorised copying or re-wording of this blog content is prohibited. The copyright of this content is owned by Tor Currency Exchange Ltd. Any unauthorised copying or re-wording will constitute an infringement of copyright.