
Regulatory update: An AI special edition

AI and Financial Regulation

The advent of generative artificial intelligence (GenAI) – particularly the introduction of the unexpectedly powerful ChatGPT in late 2022 – spurred renewed and pressing focus on AI regulation globally, as countries strive to balance innovation with safeguards for a technology whose wide-ranging applications are advancing faster than many anticipated.

September 2024

Sam ten Cate
Product Business Risk Management

Matthew Sample
Emerging Technologies Governance Architect

AI regulation is beginning to take shape: several countries have proposed AI frameworks, and the European Union (EU) has enacted comprehensive legislation – the first of its kind in a major economy.

In July, regulators from the United States, the United Kingdom and the European Union issued a joint statement outlining concerns about market concentration and anti-competitive practices in GenAI – the technology behind popular chatbots like ChatGPT.

“There are risks that firms may attempt to restrict key inputs for the development of AI technologies,” the regulators warned, highlighting the need for swift action in a rapidly evolving field. “The AI ecosystem will be better off the more that firms engage in fair dealing,” they added.

While the US, UK and EU authorities are unlikely to create unified regulations, their alignment suggests a coordinated approach to oversight. In the coming months, this could mean a closer examination of AI-related mergers, partnerships and business practices.

Given the breadth and depth of GenAI’s potential impact, lawmakers and regulators governing financial institutions’ use of the technology will need to consider a broad range of regulatory implications, including:

  • Safety and soundness
  • Consumer protection and fairness
  • Data privacy
  • Intellectual property
  • Employment rights

In this article, we outline other significant developments in AI regulation around the world in 2024 so far.

Regulatory implications

North America

This has been a bumper year for US government agencies publishing AI updates. In July, the National Institute of Standards and Technology (NIST), part of the US Department of Commerce, published a generative AI profile of its AI Risk Management Framework in response to the presidential executive order on AI issued last October. It is a flexible, voluntary resource designed to help organizations of all types, across sectors, identify and manage the risks posed by AI.

The document “defines risks that are novel to or exacerbated by the use of GenAI.” After introducing and describing these risks, it provides a set of suggested actions to help organizations govern, map, measure and manage them; an illustrative sketch of how those functions might be applied in practice follows the risk list below.

The core AI risk areas identified are:

  • CBRN information or capabilities: Eased access to or synthesis of materially nefarious information or design capabilities related to chemical, biological, radiological or nuclear (CBRN) weapons or other dangerous materials or agents.
  • Confabulation: The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.
  • Dangerous, violent or hateful content: Eased production of and access to violent, inciting, radicalizing or threatening content as well as recommendations to carry out self-harm or conduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging or stereotyping content.
  • Data privacy: Impacts due to leakage and unauthorized use, disclosure or de-anonymization of biometric, health, location or other personally identifiable information or sensitive data.
  • Environmental impacts: Impacts due to high compute resource utilization in training or operating GenAI (GAI) models, and related outcomes that may adversely impact ecosystems.
  • Harmful bias or homogenization: Amplification and exacerbation of historical, societal and systemic biases; performance disparities between subgroups or languages, possibly due to non-representative training data, that result in discrimination, amplification of biases or incorrect presumptions about performance; undesired homogeneity that skews system or model outputs, which may be erroneous, lead to ill-founded decision-making or amplify harmful biases.
  • Human-AI configuration: Arrangements of or interactions between a human and an AI system which can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, overreliance or emotional entanglement with GAI systems.
  • Information integrity: Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns.
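
To make these risk areas and the govern, map, measure and manage functions more concrete, below is a minimal, hypothetical Python sketch of how an organization might record them in a simple risk register. The field names, owners, scores and controls are illustrative assumptions and are not drawn from the NIST document.

from dataclasses import dataclass, field

# Minimal, illustrative risk register for the NIST GenAI risk areas.
# Field names, owners, scores and controls are hypothetical assumptions.

@dataclass
class RiskEntry:
    area: str                 # NIST GenAI risk area, e.g., "Confabulation"
    owner: str                # accountable function ("govern")
    likelihood: int           # 1 (rare) to 5 (frequent) -- illustrative scale
    impact: int               # 1 (minor) to 5 (severe) -- illustrative scale
    controls: list = field(default_factory=list)  # mitigations ("manage")

    @property
    def priority(self) -> int:
        # Simple likelihood x impact score used to rank review order ("measure")
        return self.likelihood * self.impact

register = [
    RiskEntry("Confabulation", "Model Risk Management", 4, 3,
              ["human review of outputs", "source citation checks"]),
    RiskEntry("Data privacy", "Data Protection Office", 3, 5,
              ["prompt/response logging", "PII redaction before training"]),
    RiskEntry("Harmful bias or homogenization", "Model Risk Management", 3, 4,
              ["subgroup performance testing", "training-data representativeness review"]),
]

# Rank the mapped risks so the highest-priority areas are reviewed first ("map").
for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(f"{entry.area}: priority={entry.priority}, controls={entry.controls}")

In practice, a register like this would sit alongside the qualitative guidance in the framework rather than replace it.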

The standards outlined in the framework have attracted the support of the US banking industry. The Bank Policy Institute responded to the framework, saying:

“Banking organizations are supportive of the NIST AI Risk Management Framework… Similar to the concept of… other governance frameworks already in use in the banking industry that are applied across an organization, the AI RMF 1.0 notes that ‘governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.’ The AI RMF 1.0 also allows for a principles-based approach that can include many of the same factors that banking organizations already consider in their overall risk governance frameworks.”

In July, the Treasury Department published a request for information, “seeking public comment on the uses of AI in the financial services sector and the opportunities and risks presented by developments and applications of AI within the sector.”

The Congressional Research Service (CRS) – a nonpartisan research unit within the US Congress that provides elected officials with analysis on key areas of policy and legislation – published a report in April assessing various ways in which AI developments touch financial services. The paper also offers considerations for rule makers seeking to manage the implications of those touchpoints.

Among the most significant conclusions were:

  • Data-related policy issues
    In particular, the concern that existing security measures might not stand up to attempts by sophisticated AI models to uncover personal data about clients. For example, the paper states: “Concerns about data privacy have increased as models improve and may be able to accurately identify owners of previously anonymized data.” This puts at risk organizations’ ability to comply with the Gramm-Leach-Bliley Act (GLBA), which “requires that financial institutions ensure the privacy and confidentiality of customers and protect against threats and unauthorized access and use of data… To comply, banks and financial institutions subject to GLBA typically anonymize or ‘deidentify’ their data before selling it.”
  • Concentration risk
    There are two elements to this concern:
  1. That the initial capital outlay of implementing AI across analysis and operations processes could be prohibitive for smaller market participants, creating barriers to entry and enabling larger firms to capture a larger market share.
  2. As a consequence of the above: “The concentration of human capacity and model development at a handful of firms means the number of firms using the same underlying model or data may create or exacerbate systemic risk.” The report specifically cites the risk of flash crashes based on “herding behavior.”
  • Market manipulation
    GenAI models could potentially teach themselves illegal market manipulation techniques such as “pinging” (“submitting an order into the market with the intention of determining participants’ intention of selling large quantities in the future and then selling prior to such large sales moving the markets down”) or “spoofing” (“a trader initiating a large buy order, thus driving market price for a security higher and placing a sell order at the higher price while cancelling the false buy order, thereby capitalizing on the sale at the elevated price”) as part of their learning process. The risk arises if the models either do not also learn that these techniques are unlawful before employing them, or disregard the law if their programming allows them enough autonomy to “choose” to prioritize client outcomes over compliance. (A simplified illustration of how such a pattern might be screened for appears after this list.)
  • Supervisory technology
    This is largely framed by the paper as a potential benefit of AI to financial services regulation. It notes, “Market regulators such as the CFTC [Commodity Futures Trading Commission] and SEC have begun using deep learning tools to detect market manipulation and money laundering... [the CFTC] has also suggested employing AI to evaluate the troves of data to which it has access – including from registrants, clearinghouses and public data – and to perform systemic monitoring in ways that may forestall crises.”
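
To illustrate how supervisory tools of the kind described above might flag the “spoofing” pattern – a large order placed to move the price, an opposite-side trade at the better price, then a cancellation of the original order – here is a minimal, hypothetical Python sketch. The event schema, time window and size threshold are assumptions for demonstration and do not describe any regulator’s actual surveillance system.

from dataclasses import dataclass

# Minimal, illustrative spoofing-pattern screen. The event schema, window and
# size threshold are hypothetical assumptions, not any regulator's real system.

@dataclass
class OrderEvent:
    trader: str
    ts: float      # event time in seconds
    side: str      # "buy" or "sell"
    qty: int
    action: str    # "place", "trade" or "cancel"

def flag_possible_spoofing(events, window=10.0, min_qty=10_000):
    """Flag a trader who places a large order on one side, trades on the
    opposite side, then cancels the large order, all within `window` seconds."""
    flags = []
    events = sorted(events, key=lambda e: e.ts)
    for i, placed in enumerate(events):
        if placed.action != "place" or placed.qty < min_qty:
            continue
        opposite = "sell" if placed.side == "buy" else "buy"
        traded = cancelled = False
        for later in events[i + 1:]:
            if later.trader != placed.trader or later.ts - placed.ts > window:
                continue
            if later.action == "trade" and later.side == opposite:
                traded = True
            if later.action == "cancel" and later.side == placed.side:
                cancelled = True
        if traded and cancelled:
            flags.append((placed.trader, placed.ts, "possible spoofing pattern"))
    return flags

# Example: a large buy order, a sell at the inflated price, then the cancel.
sample = [
    OrderEvent("T1", 0.0, "buy", 50_000, "place"),
    OrderEvent("T1", 2.0, "sell", 5_000, "trade"),
    OrderEvent("T1", 3.0, "buy", 50_000, "cancel"),
]
print(flag_possible_spoofing(sample))

Real surveillance systems use far richer data and models; the point of the sketch is only that the manipulation pattern described in the report is, in simplified form, detectable from order-event sequences.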

The Treasury also published a report in March on managing AI risks in financial services, which reiterated the CRS’s concerns about larger companies leaving smaller ones behind in AI investment and creating a “capability gap,” thereby exposing smaller organizations to greater risk of fraud. It also identified several other priorities, including:

  • Coordination between US agencies and international counterparts
  • Greater transparency about the data used to train AI models and how the models use that data
  • Better training for human workforces to understand AI
  • Common standards for AI terminology

US Acting Comptroller of the Currency Michael J. Hsu gave a speech in July outlining a vision of a consortium-based “shared responsibility” model for how financial services firms approach AI.

At the state level, lawmakers in California – the world’s biggest center for technology development, including AI – advanced legislation through the State Senate “to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.”
 

Asia Pacific

Singapore’s central bank, the Monetary Authority of Singapore (MAS), focused on the cyber security dimension of AI in its most recent statement. In July, it published a paper, “Cyber Risks Associated with Generative Artificial Intelligence,” which covered several areas, including data protection.

Among its recommendations for financial institutions were:

  • Continuous logging and monitoring of data collected by AI systems (a simplified sketch of this approach appears after this list): “Financial Institutions are encouraged to incorporate AI solutions with log management systems, which work by ingesting data points from devices throughout the network. These logs are then analyzed with machine learning models in real-time to help with faster anomaly detection, event correlation, making predictions and auto-remediation.”
  • Making the detection of and response to “deepfakes” (AI-generated video impersonations of real people) part of their cyber security training, protocols and wargaming: “Financial Institutions are encouraged to outline the steps to be taken in the event of a deepfake attack. This includes processes for reporting incidents, conducting investigations, communicating with stakeholders, and taking down deepfake content.” The paper cites real examples of deepfakes being used to impersonate individuals and extract money from their financial accounts.
  • Using AI-based detection systems to spot AI-generated malware, which the paper warns is becoming increasingly sophisticated: “Some malware were observed to use GenAI to help them implement polymorphism [malware that constantly changes its coding to avoid recognition] to bypass traditional signature-based filtering and evade detection.” MAS notes that some AI-based malware detection tools can learn to spot this.
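
As a rough illustration of the first recommendation above – feeding log-derived data into a machine-learning model for anomaly detection – below is a minimal, hypothetical Python sketch using scikit-learn’s IsolationForest. The log features, simulated values and contamination setting are assumptions for demonstration and are not taken from the MAS paper.

import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal, illustrative log-anomaly screen. Features (requests per minute,
# bytes transferred, failed logins) and values are hypothetical assumptions.

rng = np.random.default_rng(0)

# Simulated "normal" traffic features, as they might be derived from the
# data points a log management system ingests across the network.
normal = np.column_stack([
    rng.normal(60, 10, 1000),     # requests per minute
    rng.normal(5e5, 1e5, 1000),   # bytes transferred
    rng.poisson(1, 1000),         # failed logins
])

# Fit on recent history, then score incoming log windows as they arrive.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

incoming = np.array([
    [62, 4.8e5, 0],      # looks like normal traffic
    [900, 9.0e6, 40],    # request burst, heavy egress, many failed logins
])
print(model.predict(incoming))  # 1 = normal, -1 = flagged as anomalous

In a production setting, such scores would feed the event-correlation and remediation workflows the paper describes rather than stand alone.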

The MAS has historically taken a collaborative approach to AI regulation, publishing guidelines via its Veritas consortium, which includes a number of financial services and technology organizations. Its most recent publication was in 2022.

China’s “Interim Measures for the Management of Generative Artificial Intelligence Services” came into force last year but, as their name suggests, they are not the government’s final word on AI. Although primarily focused on broader social and economic implications of AI, such as misinformation, the measures share the US focus on personal information protection and have relevance for financial services organizations. For example, “The draft rules provide that where generative AI services involve personal information, the service providers are considered ‘personal information handlers’ under the PIPL (Personal Information Protection Law) and related laws.”

Similarly, in June, Hong Kong’s Privacy Commissioner for Personal Data published an “Artificial Intelligence Personal Data Protection Framework,” which outlined “recommendations and best practices regarding governance of AI” for organizations, including financial services firms, “which procure, implement and use any type of AI systems.”
 

Europe

The EU’s AI Act became law in August, and the full range of its rules will be enforced starting in August 2026. It primarily deals with consumer protection issues, including those related to retail financial services customers. The European Commission has also issued a targeted consultation for the financial services industry to gather feedback on firms’ planned uses of AI and their perceptions of the risks involved.

In the UK, the central bank, the Bank of England (BoE), and the Prudential Regulation Authority (PRA) published a joint response in April to a government request for an update on their procedures and systems for regulating AI within their regulatory purviews. Another regulator, the Financial Conduct Authority (FCA), published a separate response at the same time.

All three regulators’ current focus is on incorporating AI directives into their existing rules, while making accommodations for new AI-focused rules in the future, if necessary.
 

What’s next?

With AI, it is easier to deny responsibility for bad outcomes than with any other technology in recent memory, so the implications for trust are significant. In the banking and finance arena, developing the “shared responsibility” model outlined above, as has been done for fraud, scams and ransomware attacks, may provide a useful starting point for mitigating this.

If the past is any guide, the micro- and macro-prudential risks from AI will emanate from the overly rapid adoption of the technology without sufficiently developed controls. What starts off as responsible innovation can quickly snowball into a hyper-competitive race to grow revenues and market share, with a “we’ll deal with it later” attitude toward risk management and controls. In time, risks grow undetected or unaddressed until there is an eventual reckoning. We saw this with derivatives and financial engineering leading up to the 2008 financial crisis and with cryptocurrencies leading up to 2022’s crypto winter.

AI appears to be following a similar evolutionary path: Initially used to produce inputs to human decision-making, then as a co-pilot to enhance human actions and finally as an agent executing decisions on its own on behalf of humans. The risks and negative consequences of weak controls increase steeply as one moves from AI as input, to AI as co-pilot, and to AI as agent.

For banks interested in adopting AI, establishing clear and effective gates between each phase could ensure that innovations are helpful and do not become dangerous. Before opening a gate and pursuing the next phase of development, banks should make sure that proper controls are in place and accountability is clearly established.

 
