For some industries, using AI and machine learning models is novel, but others – consumer finance and insurance in particular – have been building, governing, and using models for decades. These businesses have well-developed governance practices built largely around algorithmic, rule-based, and other model technologies, and around regulations that predate AI models. However, AI/ML models have unique technical and operational features that compound existing governance challenges:
• Adoption of AI models is proliferating rapidly across business lines, leading to incomplete or inaccurate model inventories.
• The “black box” characteristics of AI/ML algorithms limit insight into the predictive factors, which can be incompatible with model governance requirements that demand interpretability and explainability.
• The use of real-time data to produce real-time decisions demands a new level of model performance monitoring.
• Increased algorithmic complexity and data usage are driving increased value from models, but they also greatly increase business risk and regulatory exposure, especially because the scale of model use exceeds the capability of people to monitor them.
A number of the enterprises I talk to are revisiting their model operationalization and governance processes and strengthening them with new capabilities to accommodate the greater use of AI/ML technology. There are four important areas that enterprises are focusing on to ensure appropriate governance and risk control for their AI/ML models:
1. Enterprise-Level Model Inventory
You can’t govern what you can’t see, so every model risk management (MRM) program must start with a centralized model inventory that includes all of the metadata associated with each model throughout its life cycle, from development through deployment, modification, and retirement.
This information is not static and must be continually updated – and readily accessible to satisfy audit requirements – throughout the model’s life span.
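As a concrete illustration, an inventory record like the one above can be kept as a small data structure that logs every metadata change for auditability. This is only a minimal sketch; the field names, life-cycle stages, and example model ID are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Illustrative inventory record; fields are assumptions, not a standard."""
    model_id: str
    owner: str
    business_line: str
    stage: str = "development"   # e.g. development -> deployed -> retired
    risk_tier: str = "unrated"
    history: list = field(default_factory=list)  # audit trail of changes

    def update(self, **changes):
        """Apply metadata changes and log them with a timestamp for audits."""
        self.history.append((datetime.now(timezone.utc).isoformat(), dict(changes)))
        for key, value in changes.items():
            setattr(self, key, value)

# A simple enterprise-wide inventory keyed by model ID.
inventory = {}
record = ModelRecord("credit-score-v2", owner="risk-team",
                     business_line="consumer-lending")
inventory[record.model_id] = record
record.update(stage="deployed", risk_tier="high")
```

Because every change passes through `update`, the `history` list doubles as the audit trail the section calls for.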
2. Enterprise-Level And Standardized Model Life Cycle (MLC) Management
Every model is unique and needs to be managed according to its own model life cycle. But effective governance of models within an enterprise demands a consistent method of defining, executing, tracking, and reporting on model life cycles. Standardization doesn’t impose one technical or operational approach; rather, it enables the unique development, deployment, operationalization, and governance aspects of every model and its MLC to be fully captured and automated in a consistent, efficient, and transparent way.
The need for standardized MLCs is becoming more pressing given the growing variety of development platforms and model factories available from software vendors, open-source projects, and cloud service providers. As data scientists produce models faster and citizen data scientists participate in creating business-unit-specific models, it is easy for different groups to adopt distinct processes for operationalizing models – which makes the governance challenge that much tougher.
Standardizing MLCs has advantages beyond governance: It reduces deployment delays and gets models into production faster. Having an enterprise-wide approach to MLC definition and management is also vital to maintaining security as enterprises increasingly adopt cloud services. With practices and processes that use on-premises tools, the IT team has typically controlled usage and access, providing a line of defense for enforcing consistency. Given the need for greater security and access control when using cloud services, having well-defined model life cycles and established processes around them is even more important.
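One way to make a standardized MLC concrete is to express it as an explicit state machine: every model moves through the same named stages and only along allowed transitions, while the work done at each stage remains model-specific. The stage names and transitions below are a hypothetical sketch, not a prescribed life cycle.

```python
# Hypothetical standardized model life cycle as an explicit state machine.
# Stage names and transitions are illustrative assumptions.
ALLOWED_TRANSITIONS = {
    "defined":   {"developed"},
    "developed": {"validated"},
    "validated": {"deployed", "developed"},  # failed validation loops back
    "deployed":  {"monitored"},
    "monitored": {"retrained", "retired"},
    "retrained": {"validated"},
    "retired":   set(),
}

def advance(current_stage: str, next_stage: str) -> str:
    """Enforce the standardized life cycle; reject undefined transitions."""
    if next_stage not in ALLOWED_TRANSITIONS.get(current_stage, set()):
        raise ValueError(f"Illegal transition: {current_stage} -> {next_stage}")
    return next_stage

# Walk one model through the shared life cycle.
stage = "defined"
for step in ("developed", "validated", "deployed", "monitored"):
    stage = advance(stage, step)
```

Encoding the life cycle this way gives every team the same vocabulary of stages to report against, which is the consistency the section argues for.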
3. Enterprise-Level Production Model Monitoring And Model Operations
Monitoring of AI/ML models is expanding beyond human scale in many businesses. Monitoring begins when a model is deployed in production systems for real business use and continues through retirement (and beyond, for historical archiving purposes). Monitoring includes verifying internal and external data inputs, documenting schema changes, tracking statistical performance and data drift, and ensuring that the model performs within the operational and business parameters set for it.
Since each model is unique, monitoring frequency generally varies from model to model. For monitoring to be most effective, it should include alerts and notifications of potential upcoming performance problems, and it should track and log the remediation steps taken until model health and performance are restored.
For example, detecting model drift is not enough. Monitoring workflows need to connect with workflows for retraining, retesting, or other corrective actions as required, initiating change requests and gating activities that need approvals.
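To sketch how a drift alert can feed a remediation workflow, the example below computes a population stability index (PSI) over binned feature distributions and, when drift exceeds a threshold, returns an action to open a retraining change request. The 0.2 alert threshold is a common rule of thumb, and the bucket shares and remediation action are illustrative assumptions.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population stability index between two binned distributions
    (each given as a list of bucket fractions summing to 1)."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_fracs, actual_fracs)
    )

def check_and_remediate(expected, actual, threshold=0.2):
    """Connect monitoring to remediation: drift past the threshold
    triggers a (hypothetical) retraining change request."""
    score = psi(expected, actual)
    if score > threshold:
        return {"alert": True, "psi": score,
                "action": "open retraining change request"}
    return {"alert": False, "psi": score, "action": "none"}

baseline = [0.25, 0.25, 0.25, 0.25]   # bucket shares at training time
drifted  = [0.10, 0.20, 0.30, 0.40]   # shares observed in production
result = check_and_remediate(baseline, drifted)
```

In a real system the returned action would be logged and routed into the change-management workflow rather than just returned, so that remediation steps are tracked until performance is restored.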
4. Model Life Cycle Automation
Automation is essential to successful model operations and governance, given the greater complexity and volume of AI/ML models. Automation provides the capacity to orchestrate, manage, and enforce every step in each model’s life cycle, providing the management oversight necessary for ongoing operations and sound governance and risk management.
A well-designed model life cycle will leverage, not replicate, the capabilities of the business and IT systems involved in developing models and maintaining model health and reliability. This includes integrating with model development platforms, change management systems, source code management systems, data management and infrastructure management systems, and model risk management systems. Duplicating any of the work of those systems introduces unnecessary effort, errors, and risk.
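The leverage-not-replicate idea can be sketched as an orchestrator that only sequences and gates steps, delegating the real work to existing systems through callables. The step names, the approval gate, and the stand-in actions below are hypothetical; in practice each action would call out to the source control, deployment, or MRM system it represents.

```python
def run_life_cycle(steps, approvals):
    """Execute life-cycle steps in order; gated steps wait for approval.
    Each step is (name, action, needs_approval)."""
    log = []
    for name, action, needs_approval in steps:
        if needs_approval and not approvals.get(name):
            log.append((name, "blocked: awaiting approval"))
            break  # stop the pipeline until approval arrives
        log.append((name, action()))
    return log

# Stand-in actions; real ones would delegate to existing systems
# (model inventory, test harness, deployment platform) rather than
# re-implement their work.
steps = [
    ("register", lambda: "recorded in model inventory", False),
    ("validate", lambda: "tests passed", False),
    ("deploy",   lambda: "released to production", True),  # gated step
]

log = run_life_cycle(steps, approvals={"deploy": True})
```

The orchestrator itself stays thin: it holds the sequencing, gating, and audit log, which is exactly the oversight layer automation is meant to provide, while everything else remains in the systems of record.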