8 Steps CCOs Should Take Now to Prepare for AI Regulation

Author

Aaron Pinnick

Type

Article

Topics
  • Compliance
  • Cybersecurity
  • Artificial Intelligence (AI)

Without question, one of the hottest topics for firms in 2023 was the emergence and rapid adoption of AI-based tools and technologies in the workplace. From the meteoric rise of OpenAI’s ChatGPT, which recorded over 57 million accounts in its first month of public availability, to Microsoft’s integration of Copilot into its suite of Office products, AI has been at the forefront of the business world as leaders work to use these new tools to streamline operations and create dramatic gains in efficiency.  

For compliance and assurance functions, the rapid expansion of AI use creates significant opportunities for more efficient and effective compliance oversight, which must be balanced against the new risks these tools pose to the organization. AI-powered tools can help compliance leaders automate routine work, monitor communications and trading, and better assess risk across the firm. Tasks that once required significant staff time and resources may now be completed in seconds, and at a scale that would have been impossible only a few years ago.

However, the rapid adoption of AI by firms also means compliance executives must monitor new data privacy and intellectual property risks, compliance risks stemming from potential biases and conflicts of interest in the tools’ underlying models, and hallucination risks that arise when these tools produce inaccurate results. These risks are significant, both for the financial health of the firm and for the broader financial system, with U.S. Securities and Exchange Commission (SEC) Chair Gensler going so far as to state that these tools and technologies will likely lead to a future financial crisis.

The Regulatory Response to AI 

The response to this surge in AI use by the world’s regulatory bodies has been swift, with the U.S., EU, China, Japan, India, and at least thirty other countries proposing some form of AI-related legal framework to manage the various risks AI poses to consumers, privacy, and society in general.  

For financial services firms in the U.S., the SEC, Congress, and FINRA have all acknowledged the risks that AI poses to the stability of markets and to investors. To combat these risks, there has been significant regulatory activity to create new guidance and rules for firms to follow and an effort to better understand how to reduce the likelihood of AI-related market disruption. Examples of this activity include:  

  • The SEC’s Conflicts of Interest and Predictive Analytics Proposal – In July of 2023, the SEC released a proposal designed to mitigate the potential for conflicts of interest that may arise when firms use predictive analytics and AI in client interactions. If adopted, firms that use certain predictive analytics tools will need to ensure that these tools are designed to place the investor’s interests ahead of the firm’s financial interests. Firms will also need to demonstrate how the underlying predictive models are tested for conflicts of interest and maintain appropriate records of these tests. As proposed, the rule would apply to a broadly defined set of “covered technology,” including some spreadsheets and statistical models that fall outside the common understanding of AI.

  • The SEC’s AI Sweep – Across the third and fourth quarters of 2023, the Division of Examinations launched a sweep of investment advisers, requesting a wide range of information regarding how AI was being used. The SEC requested documentation related to: where and how AI is currently being used; policies and procedures that govern AI use; security controls that protect client data; the source and developer of AI tools being used; training on appropriate use of AI; any regulatory, legal, or ethical incidents related to AI; marketing materials related to AI; business continuity plans related to AI system failures; and reports of testing or validation of AI models for accuracy and conflicts. Following the sweep, the SEC noted deficiencies related to misrepresentation of how AI was used in investment decisions and misleading statements about the sophistication of the AI tools firms were using.

  • The Financial Artificial Intelligence Risk Reduction Act (FAIRR Act) – In December of 2023, Sens. Warner and Kennedy introduced bipartisan legislation that would require the Financial Stability Oversight Council (FSOC) to coordinate the response of financial regulatory bodies to the threat that AI poses to financial markets. The FSOC would be directed to study this risk and identify gaps in existing regulations and guidance on AI risk. The legislation would also give the SEC greater power to impose fines and penalties when AI models are used for market manipulation and fraud. While the legislation has not yet passed the Senate or the House, it signals the congressional will to address the risks posed by AI.

  • FINRA’s Rules 3110 and 2010 – In its 2024 Annual Regulatory Oversight Report, FINRA identified AI as an emerging risk with the potential to affect many aspects of broker-dealer operations and raised concerns about accuracy, privacy, and bias within these tools. To address these concerns, FINRA has reiterated that firms using AI tools must ensure that the tools, and the activities they support, remain subject to Rule 3110 and Rule 2010. These rules require firms to supervise all activities of associated persons, including activities driven by AI technology, and to remove biases from the data used by AI technologies.

Although the UK has not proposed an AI-specific legal framework, the Bank of England (BoE), Financial Conduct Authority (FCA), and Prudential Regulation Authority (PRA) reminded financial institutions to be mindful of AI risks with the publication of a joint discussion paper in October of 2023. The regulators state that existing UK regulatory frameworks are broad enough to encompass the ways in which AI is currently being used by financial services firms.

8 Steps to Better Manage AI Risk and Compliance 

While few AI-related regulations have been finalized, it is clear that regulation of AI use in the financial services industry is top-of-mind for regulators. CCOs should begin taking steps now, not only to get ahead of future regulatory risks but also to ensure that AI tools do not create additional privacy, cybersecurity, and compliance risks. Key steps for CCOs to take include:

  1. Establish an AI Governance Committee, which can provide consistent oversight of any new AI-based tools the firm may seek to adopt.  
  2. Inventory the firm’s current AI use and assess the risks associated with the firm’s AI use cases and tools (a minimal inventory sketch follows this list).
  3. Evaluate the data security and data sources of the firm’s AI tools to identify areas of potential bias and privacy risk.
  4. Develop an acceptable use policy (AUP) for AI-based tools. The policy should include clear guidance for employees on when and how AI tools can be used and should address the elements noted in the SEC’s recent AI exam sweep.
  5. Review AI-related disclosures and marketing materials to reduce the likelihood of “AI-Washing.” 
  6. Communicate and train employees on AI-related policies, procedures, and governance to avoid potential regulatory risk. 
  7. Implement GRC controls around the firm’s AI use, including cybersecurity, privacy, supervision, model validation, scenario analyses, tabletop exercises, and incident reporting.  
  8. Include questions on AI use, policies, and controls in due diligence assessments of vendors. 
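
As a starting point for step 2, the sketch below shows one way an AI-use inventory might be captured in code. It is a minimal illustration rather than a prescribed format: the record fields, risk categories, and the flag_for_review helper are hypothetical and should be adapted to the firm’s own risk taxonomy and to the documentation areas highlighted in the SEC’s exam sweep.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical risk categories drawn from the themes discussed above
# (privacy, bias/conflicts, hallucination, vendor, recordkeeping).
RISK_CATEGORIES = {"privacy", "bias", "conflicts", "hallucination", "vendor", "recordkeeping"}


@dataclass
class AIToolRecord:
    """One entry in the firm's AI-use inventory (illustrative fields only)."""
    name: str                      # e.g., "Copilot", "internal trade-surveillance model"
    vendor: str                    # source and developer of the tool
    business_use: str              # where and how the tool is used
    handles_client_data: bool      # drives the data security and privacy review
    model_validated: bool          # has the model been tested for accuracy and conflicts?
    acceptable_use_covered: bool   # is the use case addressed in the AUP?
    risk_tags: List[str] = field(default_factory=list)


def flag_for_review(tool: AIToolRecord) -> List[str]:
    """Return reasons a tool should be escalated to the AI governance committee."""
    reasons = [tag for tag in tool.risk_tags if tag in RISK_CATEGORIES]
    if tool.handles_client_data and not tool.model_validated:
        reasons.append("client data used without documented model validation")
    if not tool.acceptable_use_covered:
        reasons.append("use case not covered by the acceptable use policy")
    return reasons


# Example usage: a hypothetical entry for a generative AI assistant
copilot = AIToolRecord(
    name="Copilot",
    vendor="Microsoft",
    business_use="drafting client communications",
    handles_client_data=True,
    model_validated=False,
    acceptable_use_covered=False,
    risk_tags=["privacy", "hallucination"],
)
print(flag_for_review(copilot))
```

Even a lightweight inventory along these lines makes it easier to respond to the documentation requests seen in the SEC’s AI sweep and to feed the vendor due diligence questions in step 8.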

How We Help 

Countries across the globe are warning financial firms of the risks inherent in using AI and, in some cases, taking decisive action to establish rules for how advisers and financial services firms may use AI. Firms need to be aware of this as they begin integrating AI tools into their work. Compliance and cybersecurity leaders should not only begin preparing the documentation necessary to satisfy the SEC but also take the initiative to educate the firm on the potential legal and regulatory risks associated with the use of AI.

ACA's regulatory compliance, cybersecurity, and privacy consultants can help clients meet the evolving challenges of AI risks through the following services: 

  • Cybersecurity risk assessments, which explore the usage and risks of generative AI.  

  • Templates and guidance on acceptable use policies for generative AI that can be tailored to the organization. 

  • Tabletop exercises designed to simulate generative AI risk scenarios. 

  • Enhanced vendor due diligence offerings that evaluate vendors’ use of generative AI.

  • Expert guidance on privacy and regulatory issues that are raised through the use of AI.  

To learn more about the SEC’s recent AI rulemaking and examination sweeps, or how ACA can support you in enhancing your policies regarding the use of AI, please don’t hesitate to reach out to your consultant or contact us here.
