Jo Ann Barefoot
Co-Founder and CEO
Alliance for Innovative Regulation
Behnaz Kibria
Director, Government Affairs and Public Policy
Google Cloud
October 24, 2024
Historically, regulators and the financial services industry have developed various model risk management (MRM) frameworks to address the potential risks that arise from the use of models in decision-making. These principles-based frameworks typically encompass:
Model validation – A rigorous assessment of a model’s accuracy, reliability, and limitations. This often involves testing the model with various datasets and scenarios to ensure it performs as expected and to identify any potential biases or weaknesses.
Governance – Clear roles and responsibilities for model development, implementation, and monitoring. This often includes establishing processes for approving models, tracking changes, and ensuring ongoing oversight.
Risk mitigation – Identifying and managing potential risks, such as model bias, data quality issues, and misuse. This often involves developing strategies to address risks, such as implementing bias detection techniques, data quality checks, and user access controls.
Our previous paper, written in partnership with Google Cloud, sought to assess the relevance of MRM for AI and machine learning (ML) models. Our latest joint paper expands on that foundation by exploring how MRM frameworks and established governance practices can be applied to manage risks in gen AI contexts.
Specifically, the paper proposes that regulators acknowledge best practices, provide enhanced regulatory clarity, and establish expectations in the following four areas: 1) model governance; 2) model development, implementation, and use; 3) model validation and oversight; and 4) shared responsibility in third-party risk management.
Gen AI has the potential to contribute significantly to the economy, with estimates suggesting an addition of up to $340 billion annually to the banking sector alone. Financial institutions are already taking advantage of gen AI-based solutions to enhance efficiency, increase productivity among employees, improve customer engagement, and mitigate fraud and security risks.
Gen AI distinguishes itself from traditional AI by moving beyond analysis and prediction to creating new content. These models are probabilistic: rather than producing a single definitive output, they generate a range of possible outputs based on the patterns they have learned. This capability unlocks new potential for human-computer interaction, allowing for more dynamic and creative applications.
While gen AI applications offer significant potential benefits, the technology also has unique characteristics and risks that should be assessed and mitigated. Importantly, existing MRM frameworks, designed to ensure the reliability and transparency of financial models, are flexible enough to accommodate gen AI deployment within financial institutions.
To reduce uncertainty about how model risk can be managed to account for these unique aspects of gen AI, regulators could anchor to industry best practices and standards, treating conformance with them as strong, perhaps presumptive, evidence that the requirements of MRM frameworks have been met.
Our new paper posits that clear governance frameworks that define roles, responsibilities, and accountability will be essential for effective oversight of gen AI. We highlight three key topics where additional regulatory clarity can benefit all stakeholders:
Documentation requirements – We recommend updating and clarifying model risk management guidance to specify documentation expectations for gen AI models.
Model evaluation and grounding – We recommend that regulators take into account developers’ use of practices such as grounding and outcome-based model evaluations, in addition to model explainability and transparency, in establishing the safety and soundness of gen AI-based models.
Controls for safe and sound AI implementation – We recommend that regulators recognize a set of controls, including continuous monitoring, robust testing protocols, and human-in-the-loop oversight, that are appropriate for ensuring the responsible deployment of gen AI in financial services.
In this process, collaboration between industry participants, regulators, and governmental bodies will be key. While the path forward involves navigating complex regulatory and ethical landscapes, the collective commitment to responsible innovation and adherence to robust model risk management practices will be pivotal in realizing the full potential of gen AI in financial services and beyond.
WHITE PAPER • Adapting Model Risk Management for Financial Institutions in the Generative AI Era
AIR and Google Cloud jointly released this paper exploring how model risk management (MRM) frameworks and established governance practices can be applied to manage risks for financial institutions in gen AI contexts.
WHITE PAPER • AIR and Google Release Paper on Applying Model Risk Management Guidance to AI
The paper argues that MRM guidance, given its broad, principles-based approach, continues to provide an appropriate framework for assessing financial institutions’ management of model risk, even for AI/ML models.
WHITE PAPER • AI: Transforming the Future or Triggering Fear?
Authored by AIR Senior Advisor and former global regulator F. Christopher Calabia, this paper examines the transformative potential and risks associated with the deployment of generative artificial intelligence (GenAI) in the financial services sector.