When the RBI’s FREE-AI committee invited me to share testimony during the shaping of its report, I came away with a deep appreciation for the thoughtfulness behind its approach. Now that the report is public, I find myself reflecting on how it compares with another landmark development in AI regulation: the European Union’s AI Act.
At first glance, the two frameworks seem to emerge from very different worlds. FREE-AI is tailored for financial services in India, a fast-growing lower-middle-income economy with a large youth population, in the midst of a rapid transition riding on strong digital adoption. The EU AI Act covers Europe, a high-income economy with strong institutions and a relatively aging population. Clearly, the contemporary priorities of these two economies may diverge, given their disparate demographic profiles and economic challenges. And yet, both are ultimately trying to answer the same question: how do we let AI flourish while keeping it safe, fair, and trustworthy, in the service of the greater social good?
Two Approaches, One Goal
Imagine AI governance as two roads heading toward the same horizon of “trustworthy intelligence”, but born from very different institutional frameworks, each designed to prioritize the public interest.
The European Union AI Act
The European Union’s AI Act is the world’s first comprehensive, legally binding AI regulation (EU AI Act Explorer), designed to foster an environment that is safe for society while underscoring fair market competition among the firms involved in AI innovation. The regulatory journey started in 2021 with the European Commission’s draft and, after years of debate, culminated in Regulation (EU) 2024/1689, formally adopted in July 2024. Its design is broad and horizontal, covering all AI systems across all sectors with the goal of creating a single rulebook for Europe.
At its core is a risk-based framework:
- Unacceptable risk AI (e.g., social scoring, manipulative systems) is outright banned.
- High-risk AI (e.g., credit scoring, healthcare devices, biometric ID) faces strict obligations: transparency, explainability, risk management, human oversight, and post-market monitoring.
- Limited and minimal-risk AI systems (like spam filters or recommendation engines) face lighter rules, often just disclosure obligations.
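The tiering above can be pictured as a simple lookup. A minimal sketch follows; the tier names come from the Act, but the example system labels and their assignments are my own simplification, not the Act’s legal definitions:

```python
# Sketch of the EU AI Act's four-tier risk model.
# Tier names follow the Act; the example systems and their
# assignments are illustrative simplifications, not legal advice.

RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative system"},        # banned outright
    "high": {"credit scoring", "healthcare device", "biometric id"},  # strict obligations
    "limited": {"chatbot"},                                           # disclosure duties
    "minimal": {"spam filter", "recommendation engine"},              # largely unregulated
}

def classify(system: str) -> str:
    """Return the risk tier for a (hypothetical) system label."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "minimal"  # systems not otherwise listed fall in the lightest tier

print(classify("credit scoring"))  # high
print(classify("spam filter"))     # minimal
```

In practice, of course, classification turns on legal definitions and intended purpose, not a keyword lookup; the point is that every system lands in exactly one tier, and obligations follow from the tier.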
The Act also introduces a regulatory infrastructure: the EU AI Office, a network of national regulators, and mandatory regulatory sandboxes (one per Member State by 2026) to help innovators test systems before release.
India’s tryst with AI regulation
In India, regulation of AI in the financial and banking sector is being spearheaded by the sector’s regulator, the Reserve Bank of India, whose innovation wing pioneered the FREE-AI framework, short for Framework for Responsible and Ethical Enablement of AI (RBI FREE-AI Report). Unlike Europe’s horizontal model, India has chosen a sector-specific approach, focused squarely on financial services, where risks are immediate but opportunities are massive.
The FREE-AI framework is built on seven Sutras (guiding principles): Trust, Efficiency, Explainability, Proportionality, Fairness, Inclusivity, and Safety. These are operationalized through six pillars, including:
- Sandbox-led experimentation for innovation.
- Shared infrastructure and datasets (leveraging India’s digital stack—Aadhaar, DigiLocker, GSTN, and land records).
- Governance at the board level: AI policies must be board-approved.
- Audit trails and disclosures to ensure explainability.
- Proportionate regulation: lighter rules for inclusion pilots, stricter rules for systemic use.
Unlike the EU, the RBI’s philosophy is not to prescribe rigid “dos and don’ts” but to allow graded liability and tolerant supervision, so that early mistakes in innovation do not crush experimentation. The emphasis is on inclusion-first deployment, recognizing India’s rural demographics and the potential of AI to extend credit, insurance, and financial literacy to the bottom of the economic pyramid.
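One toy way to picture “proportionate regulation” is as a rule that scales supervisory intensity with deployment footprint. The thresholds, labels, and function below are entirely hypothetical, invented for illustration; FREE-AI does not prescribe numeric cut-offs:

```python
# Hypothetical sketch of proportionate supervision: small inclusion
# pilots get lighter scrutiny, systemic-scale deployments get more.
# All numeric thresholds here are invented for illustration only.

def supervision_level(users_affected: int, inclusion_pilot: bool) -> str:
    if inclusion_pilot and users_affected < 10_000:
        return "light-touch"   # sandbox reporting, graded liability
    if users_affected < 1_000_000:
        return "standard"      # board-approved policy, audit trails
    return "systemic"          # full disclosures, intensive oversight

print(supervision_level(5_000, inclusion_pilot=True))       # light-touch
print(supervision_level(5_000_000, inclusion_pilot=False))  # systemic
```

The key design point is that obligations attach to the scale and purpose of a deployment, not to the underlying technology.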
Thus, while both roads are heading toward the same destination of ‘trustworthy, responsible AI’, they are born from contrasting contexts: Europe’s push for standardized risk-based compliance versus India’s drive for inclusion-led, experimental governance.
Where paths converge
Even though FREE-AI and the EU AI Act emerge from different contexts, both roads share key principles:
- Sandboxes as safe launchpads. The EU mandates at least one AI regulatory sandbox per Member State by August 2026: controlled environments in which to test, learn, and iterate before full deployment (Artificial Intelligence Act EU). Similarly, the RBI scheme invites financial entities (fintechs and regulated entities, or REs) to experiment within sector-specific testing zones, with learnings feeding policy refinement.
- Governance and accountability. The EU enforces extensive transparency, documentation, human oversight, and post-market monitoring for high-risk systems. RBI echoes this through board-approved AI policies, audit checklists, and public disclosures of AI use.
- SME-friendly design. EU regulatory sandboxes explicitly lower barriers for SMEs and startups (free access, simplified processes), while RBI enables low-threshold compliance for small-ticket offerings to prioritize inclusion, helping underserved segments leap ahead.
Where paths diverge

At the same time, where the paths fork, their differences reveal each regime’s priorities and the imagined future realities they aim towards.
Scope & Design
First off, India’s approach of letting AI use in each sector (here, the financial sector) be governed mainly by that sector’s traditional regulator (the RBI) reflects a recognition that, at the end of the day, tools are only means; the outcomes and intents of the actors using them are what matter, and those are best understood by industry bodies and sectoral regulators. As a result, FREE-AI is laser-focused on financial services, calibrated to India’s pace of inclusion and innovation.
On the other hand, the EU AI Act takes a horizontal approach, covering all sectors and all providers. It is industrial-strength, one-size-fits-all regulation. The result is a lost opportunity to strategically enable AI in the pockets and subsectors where its use can produce social good rather than harm.
Compliance Philosophy
FREE-AI opts for “graded liability” and tolerant supervision of first-time errors when safeguards exist: a learn-as-you-go principle that encourages experimentation. This matters especially because AI is still a fast-evolving technology, where today’s apparent limitations may tomorrow be solved by market-driven technical advances, without requiring the heavy hand of the law. Giving smaller experiments with genuinely lofty goals a lighter rap than deliberate systemic abuse is a sensible way to let them emerge while limiting the possibility of large-scale damage.
The EU employs prescriptive rules, regulatory white-lists, and bans, with enforcement backed by fines and market surveillance. Such an approach would be appropriate in an environment where AI had stabilized into well-understood bounds of capability, in terms of what it can and cannot do. But in a world where new solutions to previously vexing problems are invented almost every other week, the EU Act is either doomed to be archaic on arrival or locked in a continuous cycle of upgrades to stay relevant; neither is ideal for overall social good.
Foundation-Model Rules
The EU AI Act attempts to draw lines in the sand for AI capabilities based on hardware limits. It regulates GPAI (General-Purpose AI) using compute-based thresholds. Models crossing ~10²³ FLOPs of training compute are considered powerful GPAI, and beyond ~10²⁵ FLOPs they may be designated as posing systemic risk, triggering lifecycle-wide obligations from documentation to safety frameworks.
This poses several challenges, not least the pace of technological advancement. Moore’s law famously held for decades, doubling CPU capability roughly every 1.5 years, so that today’s laptops are thousands of times more capable than the entirety of the compute used to send a man to the moon. If anything, with GPUs (and GPU clusters) able to scale not only at the semiconductor node level but also through parallelism of compute within and across chips and servers, such lines in the sand are likely to become obsolete sooner or later.
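The fragility of fixed compute thresholds can be seen with back-of-the-envelope arithmetic. The FLOP thresholds below are from the Act; the 1.5-year doubling period is the classic Moore’s-law assumption, and the function name is my own:

```python
import math

# EU AI Act GPAI thresholds (training compute, in FLOPs)
GPAI_THRESHOLD = 1e23        # presumed general-purpose capability
SYSTEMIC_THRESHOLD = 1e25    # presumed systemic risk

def years_until_threshold(today_flops: float, threshold: float,
                          doubling_years: float = 1.5) -> float:
    """Years until today's compute budget reaches `threshold`,
    assuming available compute doubles every `doubling_years`."""
    if today_flops >= threshold:
        return 0.0
    doublings = math.log2(threshold / today_flops)
    return doublings * doubling_years

# A hypothetical 1e22-FLOP training run vs. the systemic-risk line:
print(round(years_until_threshold(1e22, SYSTEMIC_THRESHOLD), 1))  # 14.9
```

Under this (admittedly crude) assumption, a run three orders of magnitude below the systemic-risk line crosses it in about fifteen years of ordinary compute growth, which is short on the timescale of primary legislation.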
The RBI’s framework takes an entirely different approach, avoiding such technical thresholds. Its focus is on deployment context rather than compute power, aimed at enabling inclusion rather than restricting innovation. This flexibility, while perhaps introducing more subjectivity into the calls to be made at the time of enforcement, has produced a smart overall framework that is more likely to stand the test of time.
The Trade-offs
From a regulatory deterrence vs innovation enablement lens, each path has trade-offs.
FREE-AI:
✅ Pros
- Inclusion-first: Designed to democratize AI access for farmers, MSMEs, and rural users.
- Flexible experimentation: Sandbox + graded liability encourages innovation without heavy penalties.
- Contextual data advantage: Digital datasets provide locally relevant training data.
- Lower entry barriers: Open, public infrastructure enables startups and small fintechs to compete.
- Public trust: State-backed governance reassures first-time digital users in rural areas.
- Adaptive: Rules evolve iteratively with technology rather than being locked into legislation.
❌ Cons
- Data quality issues: AI models are limited by their training data. If public datasets are biased, incomplete, or outdated, the AI will underperform, which underscores the need for robust AI governance in the public sector.
- Sandboxing limitations: Sandboxing works well when offline tests accurately reflect real-world outcomes. However, it can be less effective for systems relying heavily on human-AI interaction, as testers cannot always perfectly simulate genuine end-user behavior.
- AI use declaration mandates: Mandating that users can opt out of AI systems can be counterproductive. For instance, if customers in default opt out of AI collection bots, delinquency rates could rise, undermining the very financial stability the regulations aim to preserve.
EU AI Act:
✅ Pros
- Strong consumer protection: Strict risk classification prevents unsafe or discriminatory AI deployment.
- Legal certainty: Clear obligations (especially for high-risk AI) reduce ambiguity for companies.
- Ethical safeguards: Bans on unacceptable uses (e.g., social scoring, certain biometric surveillance).
❌ Cons
- Innovation burden: Heavy compliance costs may discourage AI developments and innovation that could generate far more overall social good than the imagined harms the Act is designed to prevent.
- One-size-fits-all challenge: Risk classification may not fit every sector or application smoothly.
- Slow adaptability: Legal frameworks are harder to update than sandbox-based governance.
- Competitive disadvantage: Over-regulation could slow Europe’s AI ecosystem vs. US/China/India.
- Compliance bureaucracy: Documentation, audits, and conformity assessments may stifle agility.
- Risk of regulatory arbitrage: Companies may shift AI R&D outside the EU to avoid strict obligations.
📌 In short: FREE-AI maximizes access, inclusion, and flexibility, but risks some level of objectivity in its application by the regulators. EU AI Act maximizes safety, legal clarity, and ethics, but risks slowing innovation and creating regulatory drag.
References:
EU AI Act
RBI FREE-AI Report
AI & Lending (YouTube)
White House AI Bill of Rights
L&T Finance