
The Escalation of Online Financial Deceptions Through AI

February 16, 2024

The digital age has brought a significant increase in fraud in which criminals create false identities to swindle money from financial institutions or their customers. Experts anticipate that these schemes will become more rampant as artificial intelligence (AI) technology becomes more widespread.

A recent Wakefield survey of 500 fraud and risk specialists within the financial sector, first reported by ABC News, highlights deep concern over the escalating problem of fictitious online identities. The worry centres on whether the security and identity-verification technology currently used by banks and credit service providers can effectively counter this trend.

Understanding Synthetic Fraud

Industry professionals point out that providers of financial services such as loans, credit cards, and credit assessments have long battled fraudsters who steal other people’s personal details to construct fake identities for monetary gain. This form of deception, known as “synthetic” fraud, has evolved with the advent of generative AI technologies.

  • Generative AI: This encompasses AI tools capable of generating content across various media – text, imagery, audio, and video – from minimal prompts.
  • Scale of Fraud: Fraudulent activities vary widely, from complex financial schemes to simple phishing attempts, in which malicious messages are crafted to trick individuals into revealing private information.
  • AI’s Role in Fraud: AI aids criminals by enabling rapid, automated data collection from the internet. With a mix of stolen, fabricated, and legitimate data, they can convincingly impersonate real people.

Generative AI enhances the sophistication and speed of scams by facilitating:

  • Easier dissemination of phishing communications.
  • Creation of digital footprints for fictitious identities to appear genuine.
  • Duplication of activities to impersonate real individuals, thus collecting more personal data.

The Wakefield survey is among the latest efforts by security specialists, both inside and outside the financial sector, to sound the alarm about these threats. It joins cautions from major credit card issuers and consumer protection advocates, who highlight how much harder synthetic fraud is to combat than traditional identity theft: because real and fake data are merged, detection by credit monitoring and security services becomes far more challenging.

“As the security measures for banking and payments evolve, fraudsters are increasingly adopting impersonation strategies,” according to the survey findings. The aim is to deceive individuals and businesses into making payments under the false belief they are transacting with legitimate entities.

Victims often targeted include:

  • Individuals who are less likely to check their credit reports regularly.
  • Those with easily accessible information online.
  • Groups less aware of fraud risks, notably the young and elderly.

Effective responses to AI-facilitated fraud demand vast amounts of legitimate data for pattern recognition, enabling security teams to learn what normal activity looks like and flag the transactions that deviate from it.
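As a minimal sketch of that pattern-recognition idea, the Python example below trains an anomaly detector on stand-in data representing legitimate transactions and flags an outlier for review. The feature set, the simulated distributions, and the contamination rate are illustrative assumptions, not a description of any bank’s actual model.

```python
# A minimal sketch of detection via pattern recognition: train an
# anomaly detector on legitimate activity so that outliers can be
# flagged for review. The features and distributions below are
# illustrative assumptions, not a real bank's data or model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Stand-in for a large corpus of legitimate transactions:
# columns are [amount, hour_of_day, days_since_last_login].
legitimate = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.8, size=5000),  # typical amounts
    rng.normal(loc=14, scale=4, size=5000),         # mostly daytime activity
    rng.exponential(scale=3.0, size=5000),          # frequent logins
])

# Fit on legitimate behaviour only; `contamination` is the assumed
# share of anomalies expected at scoring time.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(legitimate)

# Score new activity: a prediction of -1 marks an outlier worth review.
suspicious = np.array([[25000.0, 3.0, 180.0]])  # huge amount, 3 a.m., dormant account
print(detector.predict(suspicious))  # [-1] -> flagged as anomalous
```

A production system would train on real transaction histories and combine many more signals, but the principle is the same: the more legitimate data the model sees, the sharper its notion of “normal” becomes.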

Addressing synthetic fraud presents a more complex challenge than conventional identity theft. In the traditional scenario, a perpetrator unlawfully acquires an individual’s personal details, such as their name and other sensitive information, and uses them to commit financial fraud. Synthetic fraud instead blends stolen, genuine data with fabricated details, and it is this blend that makes it significantly more difficult for credit surveillance services and other protective measures to detect.
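To illustrate why that blend is so troublesome, here is a hypothetical sketch of a naive field-matching check. The records, field names, and review threshold are invented for illustration; real verification systems weigh partial matches across many data sources.

```python
# A minimal sketch of why blended identities evade naive checks: a
# synthetic record reuses a real person's ID number while fabricating
# the rest, so no single field looks obviously fake.
# All records, fields, and thresholds here are hypothetical.
BUREAU_RECORD = {"id_number": "AB123456C", "name": "Jane Smith", "dob": "1984-02-11"}

def match_score(application: dict, bureau: dict) -> float:
    """Return the fraction of identity fields that agree with the bureau file."""
    fields = ("id_number", "name", "dob")
    return sum(application[f] == bureau[f] for f in fields) / len(fields)

# Synthetic identity: a stolen, genuine ID number plus a fabricated name and birth date.
synthetic = {"id_number": "AB123456C", "name": "Alex Doe", "dob": "1991-07-30"}

score = match_score(synthetic, BUREAU_RECORD)
print(f"match score: {score:.2f}")  # 0.33 -- a partial, plausible-looking match
# A naive rule that only rejects total mismatches would wave this through;
# partial matches like this one are exactly what needs a manual review.
print("manual review" if 0 < score < 1 else "pass or reject")
```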

Read also: AI and Mobile Security: UK’s Response to Fraud in Financial Services

Chatbots Could Emerge as the Newest Instrument in the Internet Fraudster’s Toolkit

Although ChatGPT is specifically designed to refuse to take part in online fraud, AI still has the potential to transform the tactics of scammers.

Chatbots could be used to make phishing emails more effective, since nothing prevents them from producing messages that urge users to sign into their online accounts for purported security reasons. AI-powered text-to-speech offers a further means of circumventing voice verification systems: it converts written text into audio that replicates a person’s voice accurately enough that people have already been deceived by messages from “family members” created with voice cloning techniques.

Given these developments, the malicious use of artificial intelligence poses a significant threat to both businesses and individuals. Today, scams typically involve direct interaction between scammer and victim, which limits the scammer to one target at a time. Chatbots and text-to-speech technology could radically alter this landscape, replacing manual scam operations with automated systems and amplifying the scale of fraud.

Consider this hypothetical scenario:

  • A scammer dispatches a phishing email.
  • The recipient enters sensitive information, including login credentials, a phone number, and their bank advisor’s name, into the phishing form.
  • A bot then calls the bank; only a short recording of the advisor’s voice is needed for voice reproduction.
  • Finally, the chatbot calls the victim, mimicking the bank advisor’s voice to persuade them to authorise a transaction or bank transfer.

Protective Measures Against AI-Driven Investment Fraud

An article published on www.nasdaq.com about artificial intelligence (AI) and investment fraud sheds light on the potential criminal uses of AI in the investment sector and outlines strategies for safeguarding one’s financial assets.

Key Recommendations for Investor Safety:

  • Unauthorised Investment Platforms Using AI: Be cautious of platforms and individuals claiming to use AI for trading without proper registration. A lack of registration is a red flag that warrants investigation before any money is invested.
      • Be wary of platforms making extravagant promises, such as infallible AI trading systems or guaranteed stock picks.
      • Fraudulent schemes often exploit AI’s allure to attract unwary investors.
  • Investment Opportunities Too Good to Be True: Offers promising high returns with minimal risk are classic indicators of fraud. Fraudsters employ sophisticated techniques to make their scams appear credible.
  • Investing in AI-Related Companies: While investing in companies claiming to be at the forefront of AI technology might seem appealing, caution is advised.
      • Be sceptical of companies making unsubstantiated claims about how AI will boost their profitability.
      • Watch out for red flags such as high-pressure sales tactics, promises of rapid profits, or guaranteed returns with minimal risk (a simple screener for such phrases is sketched after this list).
  • The Dangers of Microcap Stocks: These stocks, including penny and nano-cap stocks, are especially susceptible to fraudulent activity, including schemes involving AI.
      • Limited public information on microcap companies makes it easier for fraudsters to spread misinformation and manipulate stock prices.
  • AI-Enhanced Deception Tactics: Fraudsters are increasingly using AI to create fake audio, alter images, or produce counterfeit videos that disseminate false information.
      • Deepfake technology and AI-generated material can impersonate known individuals or entities to dupe investors into fraudulent transactions.
  • Independence from AI-Generated Investment Advice: Exercise caution when relying on AI for investment decisions. AI-generated information can rest on inaccurate or misleading data, leading to faulty conclusions.
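To make that checklist concrete, here is a minimal sketch of a heuristic “red flag” screener in Python. The phrase list, weights, and threshold are invented for illustration; they are not an official FINRA, SEC, or NASAA rule set, and a low score is no guarantee that an offer is legitimate.

```python
# Hypothetical red-flag phrases and weights drawn from the checklist
# above; any real screener would need a far richer rule set.
RED_FLAGS = {
    "guaranteed returns": 3,
    "no risk": 3,
    "ai trading system": 2,
    "act now": 2,          # high-pressure sales tactic
    "penny stock": 1,      # microcap exposure
    "unregistered": 3,
}

def red_flag_score(pitch: str) -> int:
    """Sum the weights of known red-flag phrases found in an investment pitch."""
    text = pitch.lower()
    return sum(weight for phrase, weight in RED_FLAGS.items() if phrase in text)

pitch = "Our AI trading system delivers guaranteed returns with no risk. Act now!"
score = red_flag_score(pitch)
print(score)  # 10
print("investigate before investing" if score >= 3 else "no obvious red flags")
```

Naive substring matching like this is easy to evade, which is exactly why the human steps below, verifying sources and consulting registered professionals, remain essential.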

Preventative Steps for Individual Protection:

  • Verify the authenticity of information and cross-reference multiple sources before making investment decisions.
  • Consult registered investment professionals with any questions and for advice.
  • Visit reputable websites such as Investor.gov, nasaa.org, or finra.org for additional tips on wise investing and fraud avoidance.

FAQ

How is AI technology contributing to the increase in online financial fraud?

AI technology contributes to online financial fraud by enabling scammers to create more convincing false identities and execute sophisticated schemes at a scale previously unattainable. Generative AI, for example, can produce realistic text, images, audio, and video content, facilitating the creation of fake digital footprints and impersonating real individuals to commit fraud.

What is synthetic fraud, and why is it a concern for financial institutions?

Synthetic fraud involves the creation of false identities using a combination of real and fake information. It poses a significant challenge for financial institutions because the genuine data anchoring each fake identity makes the fraud much harder for credit monitoring and verification systems to detect.

Can chatbots be used for fraudulent activities? 

Yes, chatbots can be used for fraudulent activities by enhancing the effectiveness of phishing emails and circumventing voice verification systems. Scammers can use AI-powered chatbots to automate and scale deceptive communications, such as emails that trick users into revealing sensitive information. Furthermore, AI-powered text-to-speech capabilities can mimic a person’s voice convincingly, tricking individuals into believing they are communicating with a trusted contact.

What strategies can individuals use to protect themselves from AI-enhanced financial scams?

Individuals can protect themselves from AI-enhanced financial scams by being cautious of investment platforms and offers that seem too good to be true, verifying the authenticity of information, and consulting with registered investment professionals. Additionally, staying informed about the latest fraud tactics and relying on reputable sources for investment advice can help mitigate the risks of falling victim to such scams.

Secure Your Money with Payrow

Research shows that the use of artificial intelligence could also trigger an increase in fraud in the financial industry. Investment in fraud prevention and cyber resilience will need to accelerate to close the gaps and counter the risks created by cyberattacks and fraudsters.

Payrow is committed to strengthening defences against increasingly sophisticated financial fraud while keeping pace with evolving data protection standards. Our commitment goes beyond mere compliance: we aim to maintain operational efficiency and simplify routine business processes while strictly adhering to robust UK data protection regulations.

We recognise the importance of meeting global standards and focus on providing financial services that are both secure and efficient, prioritising the protection of customer data. Business owners who choose us as their partner can be sure that their financial transactions and data will be as secure as possible.

Our goal is to ensure that every interaction you have with Payrow is secure, seamless, and trustworthy.
