Regulatory update: An AI special edition
The advent of generative artificial intelligence (GenAI) – particularly the introduction of the unexpectedly powerful ChatGPT in late 2022 – spurred renewed and pressing focus on AI regulation globally, as countries strive to balance innovation against the need to safeguard a technology with wide-ranging applications that is advancing faster than many anticipated.
September 2024
Sam ten Cate
Product Business Risk Management
Matthew Sample
Emerging Technologies Governance Architect
AI regulation is beginning to take shape: several countries have proposed AI frameworks, and the European Union (EU) has enacted comprehensive legislation – the first of its kind in a major economy.
In July, regulators from the United States, United Kingdom and European Union issued a joint statement outlining concerns about market concentration and anti-competitive practices in GenAI – the technology behind popular chatbots like ChatGPT.
“There are risks that firms may attempt to restrict key inputs for the development of AI technologies,” the regulators warned, highlighting the need for swift action in a rapidly evolving field. They added: “The AI ecosystem will be better off the more that firms engage in fair dealing.”
While the US, UK and EU authorities are unlikely to create unified regulations, their alignment suggests a coordinated approach to oversight. In the coming months, this could mean a closer examination of AI-related mergers, partnerships and business practices.
Given the breadth and depth of GenAI’s potential impact, lawmakers and regulators governing financial institutions’ use of the technology will need to consider a broad range of regulatory implications, including:
In this article, we outline other significant developments in AI regulation around the world in 2024 so far.
North America
This has been a bumper year for US government agencies publishing AI updates. In July, the National Institute of Standards and Technology (NIST), part of the US Department of Commerce, published a generative AI profile for its AI Risk Management Framework (AI RMF), in response to last October’s presidential executive order on AI. The AI RMF is a flexible, voluntary framework that helps organizations of all types, across sectors, identify and manage the risks posed by AI.
The new profile “defines risks that are novel to or exacerbated by the use of GenAI.” After introducing and describing these risks, it provides a set of suggested actions to help organizations govern, map, measure and manage them.
The core AI risk areas identified are:
The standards outlined in the framework have attracted the support of the US banking industry. The Bank Policy Institute responded to the framework, saying:
“Banking organizations are supportive of the NIST AI Risk Management Framework… Similar to the concept of… other governance frameworks already in use in the banking industry that are applied across an organization, the AI RMF 1.0 notes that ‘governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.’ The AI RMF 1.0 also allows for a principles-based approach that can include many of the same factors that banking organizations already consider in their overall risk governance frameworks.”
In July, the Treasury Department published a request for information, “seeking public comment on the uses of AI in the financial services sector and the opportunities and risks presented by developments and applications of AI within the sector.”
The Congressional Research Service (CRS) – a nonpartisan research unit within the US Congress that provides elected officials with analysis on key areas of policy and legislation – published a report in April assessing various ways in which AI developments touch financial services. The paper also offers considerations for rulemakers seeking to manage the implications of those touchpoints.
Among the most significant conclusions were:
The Treasury also published a report in March on managing AI risks in financial services, which reiterated the CRS’s concern that larger companies’ AI investments could leave smaller ones behind, creating a “capability gap” that exposes smaller organizations to greater risk of fraud. It also identified several other key areas of risk, including:
US Acting Comptroller of the Currency Michael J. Hsu gave a speech in July, outlining a vision of a consortium-based “shared responsibility” model for financial services firms’ approach to AI.
At the state level, the State Senate in California – the world’s biggest center for technology development, including AI – advanced legislation “to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.”
Asia Pacific
Singapore’s central bank, the Monetary Authority of Singapore (MAS), focused on the cybersecurity side of AI in its most recent statement. In July, it published a paper, “Cyber Risks Associated with Generative Artificial Intelligence,” covering several areas, including data protection.
Among its recommendations for financial institutions were:
The MAS has historically taken a collaborative approach to AI regulation, publishing guidelines via its Veritas consortium, which includes a number of financial services and technology organizations. The consortium’s most recent publication was in 2022.
China’s “Interim Measures for the Management of Generative Artificial Intelligence Services” came into force last year but, as their name suggests, they are not the government’s final word on AI. Primarily focused on the broader social and economic implications of AI, such as misinformation, the measures share the US focus on personal information protection and have clear relevance for financial services organizations. For example, “The draft rules provide that where generative AI services involve personal information, the service providers are considered ‘personal information handlers’ under the PIPL (Personal Information Protection Law) and related laws.”
Similarly, in June, Hong Kong’s Privacy Commissioner for Personal Data published its “Artificial Intelligence: Model Personal Data Protection Framework,” which outlined “recommendations and best practices regarding governance of AI” for organizations – including financial services firms – “which procure, implement and use any type of AI systems.”
Europe
The EU’s AI Act entered into force in August, with the full range of its rules becoming enforceable from August 2026. It primarily deals with consumer protection issues, including those related to retail financial services customers. The European Commission has also issued a consultation aimed specifically at the financial services industry, to gather feedback on firms’ planned uses of AI and their perceptions of the risks involved.
In the UK, the Bank of England (BoE) and the Prudential Regulation Authority (PRA) published a joint response in April to a government request for an update on their procedures and systems for regulating AI within their regulatory purviews. Another regulator, the Financial Conduct Authority (FCA), published a separate response at the same time.
All three regulators are currently focused on incorporating AI directives into their existing rules, while leaving room for new AI-focused rules in the future, if necessary.
What’s next?
With AI, it is easier to deny responsibility for bad outcomes than with any other technology in recent memory, so the implications for trust are significant. In banking and finance, developing the “shared responsibility” model outlined above for fraud, scams and ransomware attacks may provide a useful starting point for mitigating this risk.
If the past is any guide, the micro- and macro-prudential risks from AI will emanate from the overly rapid adoption of the technology without sufficiently developed controls. What starts off as responsible innovation can quickly snowball into a hyper-competitive race to grow revenues and market share, with a “we’ll deal with it later” attitude toward risk management and controls. In time, risks grow undetected or unaddressed until there is an eventual reckoning. We saw this with derivatives and financial engineering leading up to the 2008 financial crisis and with cryptocurrencies leading up to 2022’s crypto winter.
AI appears to be following a similar evolutionary path: initially used to produce inputs to human decision-making, then as a co-pilot enhancing human actions, and finally as an agent executing decisions on its own on behalf of humans. The risks and negative consequences of weak controls increase steeply as one moves from AI as input to AI as co-pilot to AI as agent.
For banks interested in adopting AI, establishing clear and effective gates between these phases can help ensure that innovations remain helpful and do not become dangerous. Before opening a gate and pursuing the next phase of development, banks should make sure that proper controls are in place and accountability is clearly established.
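To make the gating idea concrete, here is a minimal sketch in Python of how such phase gates might be encoded. The phase names, the `ControlCheck` and `PhaseGate` structures, and the example checks are illustrative assumptions, not requirements drawn from any framework cited above.

```python
from dataclasses import dataclass, field

# Adoption phases, ordered by increasing autonomy (and therefore risk).
PHASES = ["input", "co-pilot", "agent"]

@dataclass
class ControlCheck:
    """One control that must pass before a gate opens (illustrative)."""
    name: str
    passed: bool
    owner: str  # the team or individual accountable for this control

@dataclass
class PhaseGate:
    """Gate guarding the transition into the next adoption phase."""
    target_phase: str
    checks: list[ControlCheck] = field(default_factory=list)

    def can_open(self) -> bool:
        # A gate opens only if every control has passed AND has a named owner,
        # mirroring the twin requirements of controls and accountability.
        return bool(self.checks) and all(c.passed and c.owner for c in self.checks)

def next_phase(current: str, gates: dict[str, PhaseGate]) -> str:
    """Advance one phase only if the gate into it is fully satisfied."""
    idx = PHASES.index(current)
    if idx == len(PHASES) - 1:
        return current  # already at the most autonomous phase
    candidate = PHASES[idx + 1]
    return candidate if gates[candidate].can_open() else current

# Example: a bank using GenAI to produce inputs considers promoting it to co-pilot.
gates = {
    "co-pilot": PhaseGate("co-pilot", [
        ControlCheck("model validation review", passed=True, owner="model risk team"),
        ControlCheck("human-override procedure", passed=False, owner="operations"),
    ]),
}
print(next_phase("input", gates))  # -> "input": the gate stays closed until all checks pass
```

The design choice worth noting is that the gate requires both a passing control and a named owner; either alone is insufficient, reflecting the point that controls without accountability, or accountability without controls, leave the transition unsafe.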