AI in UK Financial Services: Unbelievable Potential Is Not Without Compliance Risk

Author

Roxana Nadershahi

Topics
  • Compliance
  • Cybersecurity

Artificial intelligence (AI), and its potential impacts and implications for the asset management industry, is rapidly evolving. The latest joint feedback statement, FS2/23, issued on 23 October 2023 by the Bank of England (BoE), Financial Conduct Authority (FCA), and Prudential Regulation Authority (PRA), collectively the “supervisory authorities” in the UK, outlines that the existing UK regulatory frameworks are sufficiently broad to encompass the ways in which AI is currently being used by financial services firms.

While no policy changes were proposed, the tone left readers with the sense that, until new developments emerge at the UK legislative level, AI in financial services remains very much an “in progress” issue.

Drilling into what that means for firms, FCA CEO Nikhil Rathi’s statement earlier this year (July 2023) pointed to governance and principles for top-down oversight of AI in the asset management industry. The FCA is not a regulator of technology, but it is considering how the use of AI and machine learning impacts, or poses risks to, financial services.

Rathi’s statement focused on consumer outcomes where “Big Tech” is concerned, referring to payments, transactions, retail services, and financial advice, since data sharing and behavioral biases may arise where AI is more heavily involved in making financial decisions. Wider concerns rest on market integrity in the age of real-time, unedited social media, and on how this can be manipulated by AI generators, whether by causing sharp trading volatility or by issuing false invitations to invest in non-existent investment schemes.

Clearly, the ways in which AI could be misused in financial services are plentiful.

Given that AI’s role in investment firms is changing, we look at the current challenge of whether existing compliance frameworks are really enough to mitigate AI risks, and at how to make internal compliance teams aware of the crossover with technology, cybersecurity, and third-party providers.

Addressing AI risks without direct regulatory pressure may seem pre-emptive; however, we strongly feel the shift from debate to legislation across multiple jurisdictions is happening too fast to ignore.

Existing compliance frameworks should be amended with an AI lens

Unsurprisingly, the FCA pointed to existing frameworks where AI risk could be assessed on a practical level as part of a firm’s working compliance monitoring programme. These areas include:

  • Operational risk and resilience - e.g., third-party data sharing, IT providers, trading platforms, external-facing platforms such as websites or social media, and cybersecurity infrastructure
  • Senior management - e.g., accountability, although at this stage the BoE, PRA, and FCA have no plans to single out any specific role, given the breadth of coverage AI is expected to have across all functions
  • Systems and controls - e.g., governance and decision making around where AI is used in the business and how it is monitored
  • Monitoring of platforms, systems, or functions that utilize AI - e.g., embedding tests and controls, on an ongoing basis, in more business areas where AI is being used
  • Client/consumer outcomes - e.g., treating customers fairly, and the decision making during investor onboarding and offboarding
  • Disclosures or transparency when AI is used in an investment business

So what next?

AI comes with opportunities as well as risks. While compliance professionals look to reduce risk, we recognize that firms will also be considering new use cases where AI can streamline operations and cut costs.

The FCA is developing its position on where accountability sits within a firm if AI were to go wrong. Unsurprisingly, this is not an enviable task given that, right now, any losses for a UK customer or client would point to shortcomings within a business function or a senior manager.

In a new world where we expect integrated AI to be the norm, the decision trees on who to blame may be less clear. Harking back to the feedback in FS2/23 from the BoE, PRA, and FCA, regulation in this area is still being approached piece by piece (data protection challenges, and a lack of alignment between international regulators on AI’s implications), and more work needs to be done to navigate these new compliance risks. We believe a more prepared compliance programme puts firms in a less reactive position and allows businesses to use AI in innovative ways.

How we help

ACA's regulatory compliance, cybersecurity, and privacy consultants can help clients draft policies, train employees, and assess their readiness to respond to regulatory inquiries regarding AI. ACA Signature can help: choose the combination of compliance advisory services, innovative technology, managed services, and cybersecurity that is right for your firm to gain expert insight, guidance, and support as you navigate emerging challenges like AI.

To learn more about how ACA can support you in enhancing your policies regarding the use of AI, please don’t hesitate to reach out to your consultant or contact us here.
