
AI in Financial Services

02 Feb 2026

Innovation without losing control

Artificial intelligence (AI) is rapidly reshaping global industries, and financial services in particular. AI has become a powerful assistant in automated customer support, fraud detection, operational efficiency and even investment analytics. AI tools and software are now the norm across banks, insurers, financial advisors and asset managers.

Much like the internet revolution, the introduction and maturing of AI technologies is creating meaningful opportunities for businesses. However, it also introduces significant regulatory, governance and compliance risks.

Financial Services Providers (FSPs) need to consider how their businesses will implement and maintain AI responsibly and lawfully.

Understanding what AI really is

AI refers to a system that can analyse data, identify patterns and generate outputs that would normally require human intelligence. These systems do not “think” or “reason” in a human sense; they operate on rules, training data, algorithms and probability models. They are fast, but not always correct.

In financial services, AI is used for: chatbots on websites, virtual assistants, document drafting and summarisation, transaction monitoring and fraud detection, credit risk modelling, market automation and operational support.

AI does not replace regulatory responsibility. Key Individuals and the Board of Directors remain accountable for the licensed entity.

AI hallucination vs accuracy risks

Have you ever asked AI something and been suspicious of whether the information it gave you was true? Have you second-guessed it? Interestingly, AI will sometimes admit it was wrong and offer an entirely different answer. To an unknowing user, however, the original information could easily be taken as correct.

Even though AI can produce professional-looking documentation, the information generated is not always accurate or factual. This can lead to problems down the line, especially if the people drafting the documentation do not have the industry qualification or know-how to determine whether what AI has generated is, in fact, true.

Many people are using AI to try to get ahead in financial services, often taking the technology’s answers at face value. AI should support the business, not replace core functions or qualified personnel, and it should not perform regulated activities such as making investment decisions or providing advice.

AI should not be relied upon for key company documentation such as policies, business flow plans or the rendering of financial services (such as advice or intermediary services). AI should not replace licensed human judgement or guide Key Individuals on business strategy.

At the end of the day, the Regulator will hold Key Individuals accountable, not the technology.

AI isn’t your ticket into financial services

The financial services industry is seeing many new businesses and FSPs entering the market, and many of these new FSPs are built on an AI-generated foundation. Their CEOs and visionaries pitch their business ideas to AI, which in turn generates a business structure and blueprint. Traditionally, CEOs and managers would qualify as the business’s Key Individual. We are now seeing a movement where Key Individuals are hired to tick a box and keep the Regulator happy, because CEOs are outsourcing that responsibility instead of qualifying themselves.

The regulatory expectation is that the key person running the business is fit and proper. This means holding a recognised qualification, having passed the appropriate Regulatory Exams (REs) and having a sound understanding of the workings of the financial services industry, specifically the area the FSP operates in (such as crypto products, insurance products or pension products), all while acting ethically.

Paper Business Risk

One of the FSCA’s growing concerns is the rise of “paper businesses”: FSPs that appear compliant on paper but lack the internal expertise required to operate safely. In these businesses, AI is used extensively to design business models, governance frameworks, policies and procedures. The documentation can look sophisticated, yet fail to reflect the reality of day-to-day operations.

This often means compliance exists only on paper, with a disconnect between how the business actually functions and how it should function. In such environments, Key Individuals struggle to demonstrate genuine ownership of their regulatory obligations.

From the FSCA’s perspective, this disconnect can amount to material conduct and operational risk. Effective regulation depends not on what is written, but on what is understood, implemented and actively overseen in the business.

How regulators are looking at AI

In South Africa and the UK, the Financial Sector Conduct Authority (FSCA) and the Financial Conduct Authority (FCA) respectively have both acknowledged AI’s growing role and the need to ensure it is governed responsibly.

The FSCA’s emerging focus on AI

The FSCA has signalled that AI is a priority for supervision, although it has not formally indicated that it is drafting AI-specific legislation. AI adoption is increasingly assessed through the combined application of FAIS, TCF and the policies and procedures of the FSP. The use of AI is also already regulated indirectly through the Protection of Personal Information Act (POPIA).

In November 2025, the FSCA and the Prudential Authority published a report on AI in the South African financial sector (linked: Artificial Intelligence in the South African Financial Sector), offering the first comprehensive overview of AI adoption across banks, insurers, investment managers and payment providers.

The report notes the increasing use of AI and emphasises risks including:

  1. Consumer protection

  2. Cybersecurity and stability risks

  3. Potential bias or discriminatory outcomes arising from how AI technology is programmed.

The FCA’s principles-based approach

By contrast, the UK’s FCA has been more vocal about how AI fits into its existing regulatory frameworks. Rather than introducing standalone AI laws, the FCA is applying current rules and principles, as set out in the Consumer Duty, the Senior Managers & Certification Regime (SM&CR) and model risk expectations, to AI use, while promoting innovation and experimentation. The FCA takes a principles-based, outcomes-focused stance, intervening on serious failures rather than routine issues.

AI will undoubtedly remain a powerful tool within financial services. Used correctly, it can enhance efficiency, strengthen monitoring and support better client outcomes. However, AI is not a substitute for competence, governance or accountability. In this evolving regulatory space, the successful FSPs will not be those that adopt AI the fastest, but those that adopt it responsibly, transparently and with full ownership of the risks it brings.
