The sneakiest AI scams of 2024: voice cloning, ChatGPT phishing, deepfakes and more

With experts warning that AI is making us more vulnerable to cyber scams, Katy Ward reveals how you can protect yourself against this growing threat.

It seems like every day there is a new report of yet another breakthrough in Artificial Intelligence (AI) technology.

AI can help with cancer diagnosis, control driverless cars and even write novels.

However, experts also fear that the growth of this technology will make it easier for scammers to get hold of our cash.

At the start of 2024, the UK's cybersecurity agency, the National Cyber Security Centre (NCSC), issued a report warning that “AI will almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years.”

In fact, the UK government is so concerned that it last week issued new guidance on protection against AI-driven fraud.

In this article, we look at five common AI-generated scams and how you can safeguard yourself against this type of fraud.

Voice cloning

Voice cloning allows cybercriminals to create an AI-generated replica of a person’s voice, typically in an attempt to con money out of their friends and family.

Fraudsters will often find a recording of someone’s voice on social media and recreate it using sophisticated algorithms.

A clip of just seconds can be enough to produce a convincing copy of that person’s speech patterns.

In one of the best-known examples, an AI-generated Joe Biden instructed New Hampshire residents not to vote in upcoming elections.

For ordinary people, the scam typically involves a panicked but realistic-sounding phone call that appears to come from a loved one desperately in need of emergency cash.


How you can protect yourself

While it’s natural to want to act quickly if you believe a family member is in danger, it’s important to think rationally in this situation.

Once the call has ended, try ringing the person back to check whether they really need money.

You may also want to create a safe word with your family that you can use in legitimate emergencies.

More sophisticated phishing attacks

Fraudsters have been sending dodgy emails posing as legitimate businesses for years. However, sophisticated AI tools are now making these scams almost impossible to detect.

Most AI-based phishing attacks use a type of technology known as large language models (LLMs), of which ChatGPT and Bard are the best-known examples.

These can generate large amounts of human-sounding text in seconds based on prompts from users.

For example, a fraudster might input the following instruction: “Write a 200-word professional-sounding email asking a customer to update their account details by clicking on this link.”

Even more worryingly, scammers can use this technology to create emails specifically designed to fool a particular victim (or group of victims).

For example, a criminal might ask the chatbot to write the same email using language that would appeal to a 38-year-old professional woman, or a 75-year-old retired teacher.

What’s more, this AI-generated text won’t typically contain the spelling or grammatical errors that have traditionally been a dead giveaway of a phishing scam.

How you can protect yourself

Whenever you receive an unexpected email from your bank or any other business, consider whether it makes sense that you have received it at this time, and whether it relates to your financial situation.

If you have any doubts, it doesn’t hurt to call the company to double-check.

Remember, you should use a phone number from a verifiable source, rather than contact details within the suspicious email.

If an email asks you to confirm any personal details, you should always do so by logging into your online account, instead of clicking on a link within the message.


Deepfakes

Deepfake scams rely on facial manipulation technology to create video images or photos of real-life people in fictional situations.

As deepfakes frequently feature trusted figures, these scams can spread harmful misinformation and damage the reputations of those involved.

High-profile targets have included Rishi Sunak, Piers Morgan and even the Pope.

These scams can be particularly damaging when the deepfake image of a well-respected person encourages you to invest in a fraudulent company or product.

How you can protect yourself

If you see a video or photo of someone saying or doing something that seems out of character, you should question its authenticity.

Although many deepfakes are extremely convincing, there can be clues that the image isn’t real, particularly in the person’s face. For example, the skin may appear too smooth or the skin tone could look uneven.


Fake customer service chatbots

For many of us, online chat is our preferred method of dealing with our customer service issues.

And cybercriminals are using this to their advantage.

In one of the sneakiest scams of 2024, fraudsters are creating their own chatbots, hosted on fake websites or social media accounts, that masquerade as legitimate customer support tools.

After initiating a chat, the supposed customer service rep may ask for details about your account, such as your date of birth or credit card number.

All these questions are, of course, designed to get their hands on your personal details.

How to protect yourself

While it can be extremely difficult to tell that you are chatting with an AI bot and not an actual human, there can be telltale signs that something isn’t right.

AI chatbots often repeat the same phrases or will give vague answers that don’t quite match the details of your complaint.

Also, remember that a legitimate company should never ask you to provide your bank details or PIN via a chatbot.

Romance scams

AI tools such as ChatGPT can enable fraudsters to create high volumes of authentic-looking dating profiles.

Meanwhile, platforms such as ThisPersonDoesNotExist.com and Generated.Photos also allow scammers to create fake images for use as profile pictures.

Once a potential victim has expressed a romantic interest, AI bots can chat with this person for months to gain their trust.

This often leads to requests to send money or disclose personal information that can be used for identity theft.

In a particularly shocking example from the US, a man sent $60,000 to an AI-generated woman he’d met via a dating app.


How to protect yourself

Whenever you’re dating online, you should be cautious of anyone who expresses strong emotion very early in the relationship or appears too good to be true.

Also, be wary of potential partners who respond to your messages almost immediately. Although AI chatbots can produce thousands of words in seconds, a real human could never reply at such speed.

Remember, it’s always a red flag when someone you’ve never met asks you to send them money.

Vigilance is key

With AI developing at an equally exciting and terrifying speed, scams are constantly becoming more sophisticated.

This means that our approach to fraud detection also needs to evolve.

In this new landscape, the best approach is to question everything. Whenever you receive an unsolicited message, take whatever steps are necessary to confirm it is genuine.

While this may seem extreme, falling victim to an AI scam could cost you thousands.


Copyright © lovemoney.com All rights reserved.
