What the India AI Governance Guidelines Cover

The India AI Governance Guidelines, published on 5 November 2025 by MeitY under the IndiaAI Mission, provide a principles-based voluntary framework for the safe and responsible adoption of AI in India. The guidelines are not a compliance requirement per se — India has consciously chosen a light-touch approach over a dedicated AI Act at this stage. They are a foundational policy reference that sector regulators are expected to cite when issuing binding AI mandates.

The framework introduces a three-tier risk classification for AI systems, designed to be proportionate to the potential for harm:

| Risk Tier | Definition | BFSI Examples | Recommended Practices |
|---|---|---|---|
| HIGH | Decisions that directly affect individual rights, financial inclusion, or systemic stability | Credit scoring, loan decisioning, insurance underwriting, algo trading | Bias audit (recommended quarterly), explainability documentation, human review override, board-level AI governance |
| MEDIUM | Automated recommendations with significant downstream impact | Fraud alert systems, KYC document verification, customer service bots | Performance monitoring, bias testing (annually), incident documentation |
| LOW | Assistive tools with minimal autonomous decision-making | Summarisation tools, internal process automation, search | Register in AI inventory, periodic review |
These are recommendations, not mandates

The tier classifications are a voluntary framework. However, they are increasingly cited by RBI, SEBI, and IRDAI as the baseline expectation when issuing their own sector-specific guidance. Aligning to the India AI Governance Guidelines now positions BFSI organisations ahead of forthcoming binding mandates from sector regulators.
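The tier-to-practice mapping in the table above can also live as structured data, so an inventory tool can surface the recommended cadence for any catalogued system. A minimal sketch (the tier names and practices are transcribed from the table; the function name is ours):

```python
# Recommended practices per India AI Governance risk tier,
# transcribed from the table above.
RISK_TIER_PRACTICES = {
    "HIGH": [
        "Bias audit (quarterly)",
        "Explainability documentation",
        "Human review override",
        "Board-level AI governance",
    ],
    "MEDIUM": [
        "Performance monitoring",
        "Bias testing (annually)",
        "Incident documentation",
    ],
    "LOW": [
        "Register in AI inventory",
        "Periodic review",
    ],
}

def recommended_practices(tier: str) -> list[str]:
    """Look up the recommended practices for a given risk tier."""
    return RISK_TIER_PRACTICES[tier.upper()]
```

Keeping the mapping in one place means the AI inventory, audit scheduler, and board papers all cite the same cadence.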

Where Binding AI Obligations for BFSI Actually Come From

For Indian BFSI entities, the enforceable AI-related obligations come from sector regulators, not from MeitY directly. Understanding this distinction is critical to scoping your compliance programme correctly:

RBI: FREE-AI Framework

The Reserve Bank of India’s Framework for Responsible and Ethical Enablement of AI (FREE-AI) is the most significant source of binding AI governance obligations for banks and NBFCs. It requires documented AI risk assessment, model explainability, bias testing, and board-level oversight for AI systems used in credit decisioning, fraud management, and customer-facing applications. FREE-AI obligations are enforceable through RBI’s supervisory review process.

SEBI: Algorithmic Trading Circulars

SEBI’s circulars on algorithmic trading impose binding explainability and audit trail requirements on AI systems used in strategy generation and execution. AI systems used in portfolio management also fall under SEBI’s oversight. For strategy-generating AI (not just execution), SEBI expects sufficient explainability to reconstruct the rationale for any trade flagged in a regulatory inquiry.

IRDAI: Cybersecurity and AI Guidance

IRDAI’s revised guidelines (March 2025) address AI systems used in underwriting, claims processing, and customer service. Insurers using AI in underwriting are expected to maintain bias audit records and produce them on demand during IRDAI inspections. This is the closest to a direct AI bias audit mandate in the insurance sector.

The Layered Picture

India AI Governance Guidelines (voluntary baseline) + RBI FREE-AI / SEBI circulars / IRDAI guidance (binding, sector-specific) = the complete AI compliance picture for Indian BFSI. The India AI Governance Guidelines are the national policy direction; sector regulators translate them into enforceable obligations. Aligning to the guidelines satisfies both layers simultaneously.

Why BFSI Has the Highest Alignment Priority

Even though the India AI Governance Guidelines are voluntary, BFSI organisations have the strongest practical reason to align with them proactively. The sector sits at the intersection of all three risk amplifiers in the guidelines: scale (millions of decisions per day), irreversibility (a denied loan or flagged account has real-world consequences), and displaced human oversight (credit models often run with minimal underwriter review).

More importantly, BFSI is the sector where sector regulators are most actively developing AI-specific mandates. Early alignment to the India AI Governance framework means that when RBI, SEBI, or IRDAI formalise their requirements, BFSI organisations are already positioned — rather than scrambling to implement new processes under regulatory pressure.

Intersection with SEBI CSCRF IS.3

SEBI CSCRF includes IS.3 (Information Security — AI Systems), which requires documented risk assessment and periodic review for AI systems used in market infrastructure. SEBI CSCRF IS.3 and the India AI Governance Guidelines share the same underlying logic — risk-tiered AI governance with audit trails — making them complementary rather than duplicative.

Practical Cross-Reference

The India AI Governance HIGH-risk bias audit recommendation (quarterly) maps to the SEBI CSCRF IS.3 periodic AI risk review (annual). A single integrated AI governance programme satisfies both — but it must explicitly cross-reference both frameworks in board papers and the CSCRF self-assessment submission.

How to Build Your AI Governance Baseline in 90 Days

Most BFSI CISOs are building AI governance infrastructure from scratch. The practical 90-day approach follows the India AI Governance framework’s logic while simultaneously satisfying RBI FREE-AI documentation requirements:

Days 1–30: AI System Inventory and Risk Classification

Catalogue every AI system in production. For each system, apply the India AI Governance risk tier criteria: (1) Does this system make or heavily influence decisions about specific individuals? (2) Are those decisions consequential and hard to reverse? (3) Is there meaningful human review before the decision executes? Any system that scores yes/yes/no is HIGH risk under the guidelines — and likely falls under RBI FREE-AI or IRDAI binding obligations as well.
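The three screening questions can be encoded directly, with a yes/yes/no answer landing in HIGH exactly as described above. A sketch (the MEDIUM/LOW fallback below is our illustrative simplification, not wording from the guidelines):

```python
def classify_risk_tier(
    affects_individuals: bool,  # Q1: makes or heavily influences decisions about specific individuals?
    hard_to_reverse: bool,      # Q2: are those decisions consequential and hard to reverse?
    human_review: bool,         # Q3: meaningful human review before the decision executes?
) -> str:
    """Apply the Days 1-30 screening questions to one AI system.

    yes/yes/no -> HIGH, per the criteria above. The MEDIUM/LOW
    split is an illustrative simplification for the sketch.
    """
    if affects_individuals and hard_to_reverse and not human_review:
        return "HIGH"
    if affects_individuals or hard_to_reverse:
        return "MEDIUM"
    return "LOW"

# A credit-scoring model running with no underwriter review:
classify_risk_tier(True, True, False)  # → "HIGH"
```

Systems classified HIGH here are the ones to check first against RBI FREE-AI and IRDAI binding obligations.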

Days 31–60: Bias Testing Protocol for HIGH-Risk Systems

For each HIGH-risk system, establish a bias testing methodology. Define the protected characteristics to test (at minimum: gender, geography, income segment). Select the appropriate fairness metric — demographic parity, equalised odds, or individual fairness — for the use case. Document the acceptable threshold and the escalation path when a model exceeds it. This documentation is what satisfies both the India AI Governance recommendation and the RBI FREE-AI evidence expectation.
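Demographic parity, the first metric named above, is straightforward to compute from decision logs: it is the gap in approval rates across groups. A minimal sketch (the 10% threshold and group labels are assumptions for illustration; set the threshold per your documented policy):

```python
from collections import defaultdict

def demographic_parity_difference(decisions):
    """Max gap in approval rate across groups.

    `decisions` is a list of (group, approved) pairs,
    e.g. ("urban", True). A gap above the documented
    threshold should trigger the escalation path.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

# Illustrative log: 80% approval for urban, 60% for rural applicants.
sample = ([("urban", True)] * 8 + [("urban", False)] * 2
          + [("rural", True)] * 6 + [("rural", False)] * 4)
gap = demographic_parity_difference(sample)  # ≈ 0.2
THRESHOLD = 0.10  # assumed; document the real threshold per policy
needs_escalation = gap > THRESHOLD  # True: escalate per the documented path
```

The same loop structure extends to equalised odds by conditioning the rates on the true outcome; libraries such as fairlearn implement both metrics if you prefer not to hand-roll them.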

Days 61–90: Board AI Governance Charter

Draft a board-level AI governance charter. It should define ownership of each HIGH-risk system, audit cadence, incident reporting thresholds, and the model retirement policy. The board does not need to understand the technical details but must formally receive and minute the AI risk review outputs. This board paper is the primary artefact an IRDAI inspector or RBI supervisor will ask for.
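The charter contents listed above can be captured as one structured record per HIGH-risk system, so the board paper is generated from the same source of truth as the audit scheduler. A sketch with hypothetical field values (the field names are ours, derived from the elements listed above):

```python
from dataclasses import dataclass

@dataclass
class CharterEntry:
    """One HIGH-risk system's entry in the board AI governance charter."""
    system_name: str
    owner: str                      # accountable executive for the system
    audit_cadence_days: int         # e.g. 90 for a quarterly bias audit
    incident_report_threshold: str  # when an incident escalates to the board
    retirement_policy_ref: str      # pointer to the model retirement policy

# Hypothetical example entry:
entry = CharterEntry(
    system_name="retail-credit-scoring-v4",
    owner="Chief Risk Officer",
    audit_cadence_days=90,
    incident_report_threshold="any bias-threshold breach or customer-impacting outage",
    retirement_policy_ref="MODEL-RET-001",
)
```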

The Explainability Gap

Many BFSI AI systems can produce SHAP or LIME feature importance output — but that is not what regulators ask for. The India AI Governance Guidelines and RBI FREE-AI both expect a plain-language explanation a subject or senior manager can understand. Build the plain-language layer into your model governance workflow, not as an afterthought.
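One way to build that plain-language layer is to translate the model's top feature attributions (e.g. signed SHAP contributions) into a sentence at decision time. An illustrative sketch (the phrasing template, factor names, and values are ours, not from any regulator's template):

```python
def plain_language_explanation(decision: str,
                               attributions: dict[str, float],
                               top_n: int = 2) -> str:
    """Turn feature attributions into a sentence a customer or
    senior manager can read. `attributions` maps a human-readable
    factor name to its signed contribution (e.g. a SHAP value)."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    factors = " and ".join(name for name, _ in ranked[:top_n])
    return f"The application was {decision} mainly because of {factors}."

# Hypothetical attributions for a declined loan:
text = plain_language_explanation(
    "declined",
    {"repayment history": -0.42, "income stability": -0.31, "loan tenure": 0.05},
)
# → "The application was declined mainly because of repayment history and income stability."
```

Storing this sentence alongside the raw SHAP output in the model card gives you both the technical artefact and the plain-language artefact from one workflow.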

RiskSage India AI Governance Pack

RiskSage ships a dedicated India AI Governance Pack as part of its Unified Control Library. The pack maps 10 UCL controls (under the K_AI_GOVERNANCE CRQ category) to the India AI Governance framework recommendations, cross-referenced to RBI FREE-AI and SEBI CSCRF IS.3 where binding obligations apply:

| UCL Control | Framework Alignment | Evidence Collected |
|---|---|---|
| AI.INV.1 | AI system inventory & risk classification (India AI Gov + RBI FREE-AI) | System register with tier classification, last review date |
| AI.BIAS.1 | Bias audit for HIGH-risk systems (India AI Gov guidance, IRDAI binding) | Bias audit report, fairness metrics, protected characteristics tested |
| AI.BIAS.2 | Annual bias testing for MEDIUM-risk systems (India AI Gov guidance) | Test results, methodology documentation |
| AI.EXPL.1 | Plain-language explainability (India AI Gov + RBI FREE-AI) | Model card with plain-language summary, technical output link |
| AI.OVRD.1 | Human review override (India AI Gov guidance) | Override log, SLA compliance rate |
| AI.BOARD.1 | Board-level AI governance charter (India AI Gov + RBI FREE-AI) | Signed charter, board minutes referencing AI review |
| AI.INC.1 | AI incident documentation (India AI Gov guidance) | Incident log, response timeline |
| AI.SEBI.1 | SEBI CSCRF IS.3 cross-reference (SEBI binding) | IS.3 self-assessment, dual-framework mapping table |
| AI.RET.1 | Model retirement policy (India AI Gov guidance) | Retirement checklist, data handling confirmation |
| AI.VEND.1 | Third-party AI vendor risk (India AI Gov + RBI third-party risk) | Vendor AI risk assessment, contractual obligations log |

Each UCL control is tagged with its binding vs. advisory status, so your compliance team always knows which controls are mandatory (RBI, SEBI, IRDAI) and which represent voluntary framework alignment (India AI Gov). The K_AI_GOVERNANCE category feeds into the CISO Command Dashboard’s AI risk posture widget.
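That binding-vs-advisory tagging can be consumed programmatically when prioritising remediation. A sketch over a subset of the controls above (the status labels are our illustration of the tagging concept, not the pack's actual schema):

```python
# Subset of the pack's controls with assumed binding/advisory tags,
# following the statuses given in the table above.
CONTROLS = {
    "AI.BIAS.1": "binding",   # IRDAI binding
    "AI.SEBI.1": "binding",   # SEBI binding
    "AI.OVRD.1": "advisory",  # India AI Gov guidance only
    "AI.RET.1": "advisory",
}

def mandatory_controls(controls: dict[str, str]) -> list[str]:
    """Controls carrying a binding sector-regulator obligation."""
    return sorted(c for c, status in controls.items() if status == "binding")

mandatory_controls(CONTROLS)  # → ["AI.BIAS.1", "AI.SEBI.1"]
```

Sorting binding controls first gives the compliance team an immediate remediation queue, with advisory controls following as the voluntary-alignment backlog.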

Explore RiskSage’s India AI Governance Pack

10 UCL controls aligned to the India AI Governance Guidelines and cross-referenced to RBI FREE-AI, SEBI CSCRF IS.3, and IRDAI — with clear mandatory vs. advisory tagging built in.
