Privacy in the Age of AI: New Frameworks for Data Collaboration (Part 2)

This is a two-part blog series. The following is the second part.

In Part 1, we traced how data collaborations are being reimagined and laid out the conceptual foundations: from redefining consent through the Account Aggregator framework to recognizing the limits of consent. We explored how privacy-preserving frameworks like differential privacy protect individuals even when models are built from data; how electronic contracts replace slow, manual agreements with enforceable digital rules; and how confidential clean rooms combine secure hardware and privacy guarantees to enable computation without revealing raw data.

In Part 2, we explore how these building blocks come together in practice.

The Connective Tissue: Data Collabs

Technology alone cannot guarantee privacy, fairness, or effective collaboration. Data-sharing ecosystems need institutional scaffolding — entities that can operationalize trust, manage relationships, and abstract away complexity for participants.

This is where Data Collaboratives (or Data Collabs for short) come in.

A Data Collab isn’t a regulator or a government body. Rather, it is a facilitator organization — a neutral yet entrepreneurial entity that enables, orchestrates, and sustains data collaborations using the DEPA Framework behind the scenes, following standards and processes set by trusted bodies like a Self-Regulatory Organization (SRO) and a Technology Standards Organization (TSO).

You can think of a Data Collab as the connective tissue of a data ecosystem — linking data providers, data consumers, and service providers.

In practice, a Data Collab:

  1. Provides tools and interfaces for participants to register, onboard, sign electronic contracts, and set up secure collaboration environments such as Confidential Clean Rooms (CCRs).
  2. Signs agreements with data providers to clean, prepare, and catalogue datasets so that they can be safely shared with authorized data consumers.
  3. Manages the flow of value — usually collecting payments from data consumers and distributing them fairly to data providers, while covering operational costs.
  4. Assumes accountability for ensuring that all interactions, permissions, and computations are compliant with the DEPA rules and contractual terms.
  5. Adds value beyond infrastructure — offering domain expertise, workflow design, governance and audit support — streamlining data collaborations.

Data Collabs will likely take different forms depending on the domain they serve. For example, some might focus on oncology research, others on financial fraud detection or climate-risk modeling. Each field has its own kinds of data, privacy rules, and ways of working — so it is natural for Data Collabs to specialize.

Because running these collaborations requires significant operational and technical effort, most Data Collabs will probably be for-profit enterprises. At the same time, because they operate on open, interoperable digital public infrastructure like DEPA, they are not monopolistic platforms. Instead, they enable a competitive marketplace where multiple Data Collabs can coexist, offering participants better choices, fairer pricing, and higher-quality services.

In this way, Data Collabs create a persistent institutional layer for responsible data use, enabling long-term, multi-party cooperation that would be impractical to coordinate through ad hoc agreements.

A real-world example: Accelerating Drug Discovery

Imagine three pharmaceutical companies, each developing treatments for the same rare disease. Each has conducted clinical trials with a few hundred patients — but individually, none has enough data in quantity, diversity, or parameter richness to train a robust predictive model of treatment response. 

Much like pieces of a puzzle, valuable insights often emerge only when data from different sources fit together — yet no single party should hold or see the entire picture.

If these companies could combine their datasets, and enrich them with other sources like gene expression profiles, cell imaging results, or public molecular databases, they could uncover deeper patterns and dramatically speed up drug discovery.

But three major barriers stand in their way:

  1. Competitive concerns: Each company treats its clinical data as proprietary and doesn’t want to reveal it to others.
  2. Privacy regulations: Patients gave consent only to the company that ran their trial — not to share data across firms.
  3. Practical limits: Many patients can’t be re-contacted to renew consent, making manual legal processes infeasible.

This is where the DEPA Framework fits in. Here’s how it would work:

A Data Collab is formed for long-term drug discovery collaborations. It signs electronic contracts with each company, defining rights, responsibilities, and permitted use of data. It handles registration, onboarding, and compliance checks through standardized interfaces.

Electronic contracts set out the exact terms of collaboration — specifying each party’s role, the artefacts they contribute, and the rules that govern privacy, usage, and value-sharing.

Each company uploads its encrypted trial data or model into a Confidential Clean Room. Data inside the CCR is decrypted only after checks confirm that all security and compliance conditions are met.

Data is programmatically joined and enriched within the CCR, followed by AI model training using privacy-enhancing techniques like differential privacy, which appropriately bound the chance of re-identifying patients.

Only the final trained model and its accompanying logs — never the underlying data — leave the CCR. The model can be decrypted solely by the authorized data consumer(s) (i.e. the modellers), protecting their trade secrets.

Auditors can review logs and trace the provenance of all artefacts at any time — via the DEPA AI Chain — to verify compliance and resolve disputes.
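The end-to-end flow above can be sketched in code. The following is a minimal, hypothetical illustration of a CCR pipeline; every name and check here is an assumption for illustration, not the actual DEPA Training implementation:

```python
# Hypothetical sketch of the clean-room flow described above.
# All names and checks are illustrative, not the real DEPA Training stack.

def decrypt(blob):
    """Placeholder: a real CCR decrypts only inside the TEE."""
    return blob

def ccr_pipeline(encrypted_datasets, contract_ok, attestation_ok, train_fn):
    """Run a joint computation inside a notional clean room.
    Raw data never leaves; only the trained artefact is returned."""
    # 1. Decrypt only after security and compliance checks pass
    if not (contract_ok and attestation_ok):
        raise PermissionError("compliance or attestation check failed")
    datasets = [decrypt(d) for d in encrypted_datasets]
    # 2. Join the parties' records programmatically inside the enclave
    joined = [row for ds in datasets for row in ds]
    # 3. Train with a privacy-preserving algorithm supplied as train_fn
    model = train_fn(joined)
    # 4. Only the model (plus audit logs, not shown) leaves the CCR
    return model
```

A caller could, for instance, pass a differentially private training routine as `train_fn`; the pipeline refuses to touch the data unless both the contract check and the hardware attestation pass.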

This framework delivers several benefits for all concerned stakeholders:

  • For society: Promising treatments reach patients faster, while a reusable governance and technology blueprint emerges for future biomedical collaborations. 
  • For the economy: A new data-driven economy is unlocked, enabling novel business interactions and boosting meaningful economic activity.
  • For companies: They can innovate together without exposing trade secrets or breaking regulatory rules, expanding what’s possible in research and development.
  • For regulators and auditors: Every transaction leaves a verifiable trail, simplifying oversight and boosting trust in the ecosystem.

Summing up

India’s journey toward responsible data use has been progressive and layered.

  • It began with the Account Aggregator framework — making consent Open, Revocable, Granular, Auditable, Notifying and Secure (ORGANS principle).
  • For model training and analytics, Privacy-Enhancing Technologies (PETs) — such as Differential Privacy — introduce mechanisms like the privacy budget to safeguard individuals while enabling learning.
  • To make collaboration faster and more reliable, Electronic Contracts replace traditional paper/PDF agreements with machine-readable, enforceable commitments — cutting through the friction of slow legal processes.
  • Confidential Clean Rooms (CCRs) operationalize these safeguards — enabling computation on sensitive data.
  • Finally, Data Collaboratives weave all these elements together — creating institutional and economic frameworks that make responsible, long-term data collaboration practical and sustainable.

This is the next frontier of Digital Public Infrastructure for AI — proving that protection and innovation are not opposites. With the right frameworks, we can have both.

Read Part 1: Privacy in the Age of AI: New Frameworks for Data Collaboration (Part 1)

Please note: The blog post is authored by our volunteers, Hari Subramanian and Sarang Galada

For more information, please visit: https://depa.world/

Privacy in the Age of AI: New Frameworks for Data Collaboration (Part 1)

This is a two-part blog series. The following is the first part.

Every day, we generate vast amounts of digital data — withdrawing cash, visiting doctors, ordering groceries, using various mobile apps. These data trails have the potential to streamline services, personalize experiences, and drive breakthroughs in fields from medicine to finance. Yet they also carry risks: unfair profiling, intrusive targeting, and exposure of sensitive personal information.

This presents a fundamental challenge: How can we harness the value of data while preserving individual privacy?

Understanding Privacy

In the age of AI, privacy violations no longer just expose personal information. They erode autonomy and tilt power toward those who control data and algorithms. As AI systems harvest behavioral cues, digital footprints, and social networks, people lose control, not just over their information, but also over how they are profiled and influenced. This enables subtle yet pervasive forms of coercion, from tailored manipulation of choices to algorithmic exclusion from opportunities.

At scale, such surveillance dynamics erode trust and weaken democratic agency. In this era, privacy is not merely about secrecy, it is a precondition for freedom, dignity and meaningful participation in society.

Privacy is often mistaken for confidentiality, but it’s not simply about hiding information. Privacy is the property of not being able to identify individuals from the signals they produce. Confidentiality, on the other hand, is about limiting access to those signals in the first place. To protect privacy and confidentiality while respecting individual autonomy, we need strong control mechanisms that let people decide what data is shared, with whom, for what purpose, and for how long.

And privacy isn’t a one-time setting. Data moves through a lifecycle — it is collected, used, stored, reused, and eventually deleted. These protections must hold at every stage, or they are lost.

The Mechanics of Consent

Today, consent remains the most common mechanism for privacy — the basic control primitive intended to let people decide how their data is collected, shared, and used. The concept of consent actually predates the digital era — it began in a paper-based world, where signatures and written permissions served as the primary means of authorizing data use. 

It is important to distinguish between two kinds of consent:

  1. Consent to collect data – allowing an entity to initially gather your data (for example, an app accessing your camera).
  2. Consent to share data – granting permission for that data to be used or passed on for a specific purpose (for example, a bank sharing your salary details with a loan underwriter).

Our focus in this article is on consent to share data, since that is where both the greatest privacy challenges and the most meaningful opportunities for value creation lie.

Here is the problem with how consent is implemented today. Under frameworks like GDPR, consent has been defined as a very coarse-grained and blunt artifact. The same entity collects your data, gathers your consent, and enforces the rules around its use. For individuals, this typically means an all-or-nothing choice — share everything or nothing at all. And for innovators, it stifles the ability to responsibly explore new uses of data.

India’s Innovation: Unbundling Consent

When India designed its Account Aggregator system for financial data sharing, it chose a different path. Consent to share data was unbundled into two parts:

  • Collect consent: Managed by trusted intermediaries called Account Aggregators.
  • Enforce consent: Managed downstream by Financial Information Users (like banks or wealth advisors), under ecosystem oversight.

https://sahamati.org.in/what-is-account-aggregator/

At the heart of this design lies a set of principles that make consent Open, Revocable, Granular, Auditable, Notifying, and Secure or ORGANS for short.

The Account Aggregator (AA) framework became the first manifestation of DEPA — the Data Empowerment and Protection Architecture. It is now India’s go-to model for user-consented data sharing between institutions, especially for straightforward data transfers and simple inference tasks.

Consent works well for inferences — one-time decisions like a bank checking your last six months of transactions to approve a loan. Yet, in practice, consent has well-known limits. People are asked to grant permission repeatedly, often through long, opaque terms they don’t fully understand, leading to consent fatigue and a loss of meaningful control.

These limitations become clearer when we move from individual decisions to model training and large-scale analytics, where algorithms learn patterns from millions of records. Seeking or managing consent at that scale is neither practical nor effective. 

What’s worse is that models can sometimes memorize sensitive data and inadvertently reveal it later. This highlights the need for new, complementary control primitives that uphold privacy and accountability even when explicit consent isn’t feasible.

Attempts at de-identification — the process of removing or masking identifiers to anonymize data — have significant limitations in practice. Although anonymization is meant to ensure that individuals cannot be re-identified, de-identification techniques are often reversible when datasets are combined with external information. As a result, such approaches offer only weak privacy guarantees, and numerous cases have shown how easily supposedly “anonymous” data can be linked back to individuals.

Privacy-preserving Algorithms: A New Control Primitive for Training and Analytics

To address these limits, a new class of algorithms has emerged under the broad umbrella of Privacy-Enhancing Technologies (PETs). Let us call these privacy-preserving algorithms, to differentiate them from other classes of PETs. They provide a spectrum of technical safeguards that preserve privacy while still enabling useful computation and collaboration on sensitive data.

Among these, Differential Privacy (DP), a mathematical framework for preserving individual privacy in datasets, stands out as a powerful privacy primitive for model training and data analysis.

The key idea: DP adds carefully calibrated noise to queries or model updates so that the results are statistically indistinguishable whether or not any single individual’s data is included. This ensures that nothing specific about an individual can be reliably inferred.

To make this guarantee rigorous, DP introduces the concept of a privacy budget (often represented by the parameters epsilon ε and delta δ):

  • Each query or training step “spends” some of this budget.
  • With more queries or training epochs, the cumulative privacy loss increases.
  • Once the budget is exhausted, no further queries or training is allowed, keeping the risk of re-identification mathematically bounded.

Think of this as a quantitative accounting system for privacy loss. Note, however, that DP comes with a utility tradeoff: adding calibrated noise can reduce model accuracy or data usefulness. Hence, depending on the use-case, the right privacy controls may be achieved through other privacy-preserving algorithms, or a combination thereof.
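As a concrete illustration of the budget idea, here is a toy Laplace-mechanism counter in Python. The class and parameter names are our own, and this is a sketch only; production systems should use vetted libraries (for example OpenDP or Google's differential-privacy library) rather than hand-rolled noise:

```python
import random

class PrivateCounter:
    """Toy ε-differentially-private counting queries with a privacy budget."""

    def __init__(self, data, total_epsilon=1.0):
        self.data = data
        self.budget = total_epsilon  # total privacy budget ε

    def noisy_count(self, predicate, epsilon=0.1):
        if epsilon > self.budget:
            # Budget exhausted: answering would exceed the agreed bound
            raise RuntimeError("privacy budget exhausted")
        self.budget -= epsilon  # each query "spends" part of the budget
        true_count = sum(1 for row in self.data if predicate(row))
        # Laplace mechanism: a counting query has sensitivity 1, so noise
        # drawn from Laplace(0, 1/ε) gives ε-differential privacy.
        # (The difference of two Exp(ε) variables is Laplace(0, 1/ε).)
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise
```

Each call returns an approximate count whose noise is calibrated to ε; once the cumulative spend reaches the total budget, the interface simply refuses to answer, which is exactly the behaviour described in the bullets above.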

Electronic Contracts: Digitizing Trust

While privacy-preserving computation enables data to be used securely, participants still need clear agreements defining who may use it, for what purpose, or under what conditions. For such collaborations to function effectively, there must be a well-defined and enforceable contractual framework that specifies each party’s rights, obligations, and permissions.

The need for such a framework becomes even more pressing as organizations seek to unlock real value from data. No single dataset is enough; the most meaningful insights arise when information from multiple sources — hospitals, banks, labs, startups, or agencies — can be combined and analyzed responsibly. Yet each participant brings its own rules, contracts, and compliance obligations, creating a patchwork of agreements that are difficult to align.

Traditionally, contracts are legal documents — PDFs or paper agreements — written in human language, interpreted by lawyers, and enforced by institutions. They work well when a few parties are involved, but in modern data collaborations, this model quickly breaks down.

Today, every new collaboration means drafting, signing, and managing a maze of separate legal agreements, often in different formats, scattered across systems, and maintained by hand. With every participant added, the web of contracts grows bulkier, making coordination slow, expensive and error-prone. Every change or dispute requires human intervention and can take weeks or months to resolve.

This contractual friction has long been the viscous drag holding back scalable, compliant data collaboration. Not because trust is missing, but because it is buried under paperwork.

Electronic contracts transform this equation. They are machine-readable, digitally signed, and executable agreements that translate legal promises into enforceable code. Instead of being static documents, they are active digital objects that the DEPA orchestration layer can interpret and act upon — automatically initiating workflows, enforcing permissions, and ensuring compliance.

In effect, electronic contracts bridge law and computation.  They enable trust, automation, and accountability at digital speed, replacing manual paperwork with a system that can verify, execute, and audit commitments in real time.
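To make the idea tangible, an electronic contract might be represented as a machine-readable object that software checks before any computation runs. The schema and field names below are purely hypothetical, not the actual DEPA contract format:

```python
from datetime import datetime, timezone

# Illustrative electronic contract as a machine-readable object.
# Every field name here is hypothetical, not the DEPA contract schema.
contract = {
    "provider": "hospital-a",
    "consumer": "pharma-x",
    "dataset": "trial-2024-rare-disease",
    "permitted_purposes": {"model-training"},
    "privacy": {"mechanism": "differential-privacy", "max_epsilon": 1.0},
    "expires": "2026-12-31T00:00:00+00:00",
}

def authorize(contract, consumer, purpose, epsilon, now=None):
    """Return True only if a requested computation satisfies the contract."""
    now = now or datetime.now(timezone.utc)
    return (
        consumer == contract["consumer"]
        and purpose in contract["permitted_purposes"]
        and epsilon <= contract["privacy"]["max_epsilon"]
        and now < datetime.fromisoformat(contract["expires"])
    )
```

Because the terms are structured data rather than prose, an orchestration layer can evaluate them automatically and log the decision, instead of waiting on human interpretation.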

Confidential Clean Rooms (CCR)

To operationalize the above elements, we need infrastructure that embeds privacy and compliance mechanisms by design, while also supporting diverse collaboration modalities — from data analytics and model training to various forms of inference.

That’s where Confidential Clean Rooms (CCRs) come in. A CCR is a secure computing environment that allows organizations to collaborate on data without ever sharing it in plain form. You can think of it as a locked, monitored laboratory where data from multiple parties can be brought together for analysis — yet no participant, not even the operator of the lab, can peek inside.

At the heart of every CCR is Confidential Computing — a technology that uses Trusted Execution Environments (TEEs) built into modern processors.  When data enters a TEE, it is encrypted and isolated from the rest of the system, ensuring that even cloud providers or system administrators cannot access it. Computations run inside this protected enclave, and only verified results can leave. Each TEE also produces a cryptographic attestation, a proof that the computation was executed correctly and under the agreed conditions.

https://depa.world/training/architecture

On their own, CCRs provide secure execution. But when combined with two other DEPA primitives:

  1. Electronic Contracts, which specify who can use what data for what purpose, and
  2. Privacy-preserving algorithms, which provide mathematical controls over what information can or cannot leak,

they form a complete privacy-preserving data-sharing stack.

In essence, Confidential Clean Rooms (CCRs) enable confidential, techno-legal, and privacy-preserving computation on data. They make it possible to conduct large-scale data inference, analytics and modelling responsibly, without transferring raw data to any third party, and thereby eliminating the need for consent specifically for data sharing.

But technology alone doesn’t build ecosystems. Who brings this framework to life, abstracting away its complexity for everyday organizations? How might it help us confront our most urgent global challenges — in health, climate and finance? And how could it unlock entirely new kinds of enterprises, fueling a vibrant and responsible data economy for the Intelligence Age?

Data Collabs!

Read Part 2: Privacy in the Age of AI: New Frameworks for Data Collaboration (Part 2)

Please note: The blog post is authored by our volunteers, Hari Subramanian and Sarang Galada

For more information, please visit: https://depa.world/

FAQs and Facts on Techno-Legal Regulation 2.0

This blog continues our discussion on the techno-legal regulation of artificial intelligence (AI), building on our original post from 03.09.25—with a focus on key outstanding issues that required in-depth consideration, alongside the responses and questions we received from stakeholders as of 12.09.25.

Question 1: Since technology is constantly evolving, wouldn’t relying on technology to enable regulation be a flawed approach?

No—what would be flawed is mandating the use of specific technologies for regulation. In fast-evolving domains like AI, rigid technological mandates risk becoming obsolete within a short time—both stifling innovation and undermining public safety. A fundamental insight from systems theory reinforces this: to regulate or control a system that operates at speed x, the regulatory system itself must react and adapt at comparable or greater speed.

AI is evolving at breakneck speed and our understanding of the associated risks and failure pathways remains incomplete. This inherent uncertainty calls for a regulatory framework that is both flexible and adaptive. The most effective way to achieve this is by combining technological agility with failure-related metrics, all governed under lightweight legal constraints and conditions. The techno-legal approach is designed precisely for this: it sets clear outcome-focused obligations for system developers and operators, without prescribing rigid technical solutions, while promoting continuous system monitoring and adaptability to emerging risks.

For example, instead of mandating a particular technique for privacy preservation in AI training, policymakers under the techno-legal approach mandate only the regulatory outcome—i.e., privacy preservation—allowing developers to implement the latest techniques, such as differential privacy or federated learning, to achieve it. As a result, regulation remains effective and adaptive in the face of advancing technology and emerging risks.

Question 2: Isn’t a techno-legal approach most suitable when the subject of regulation is clearly defined? If so, doesn’t AI’s rapidly evolving and non-deterministic nature make it a poor candidate for such regulation?

A precise definition of the regulatory subject is essential for traditional command-and-control regulation. This model relies on ex ante identification and enumeration of risks and corresponding mitigation measures, typically framed as detailed, positive obligations that regulatees must follow. Without a clear regulatory subject, risk assessments can be inaccurate, leading to over-regulation in some areas and under-regulation in others. Given AI’s rapidly evolving and non-deterministic nature, it is ill-suited for such rigid regulation.

In contrast, a techno-legal approach focuses on defining the regulatory outcome, rather than the precise subject of regulation. The regulator requires that the outcome—such as privacy preservation in AI training—be embedded into the technical design of any system that could affect it, without prescribing specific methods to achieve compliance. This removes the need for exhaustive risk enumeration upfront and avoids the pitfalls of narrowly defining the regulatory subject. By focusing on outcomes rather than rigid processes, techno-legal regulation enables continuous adaptability, making it uniquely well-suited to govern AI systems that are non-deterministic and continuously evolving in capability and complexity.

For example, Musical AI’s Rights Management Platform is a techno-legal solution that embeds the regulatory objective of copyright protection directly into the AI model development process. The platform achieves this by restricting training of music generation models to licensed content and integrating attribution technology that logs each output, linking it to the original artist or song. This ensures seamless copyright enforcement and fair revenue sharing. Crucially, the focus remains exclusively on the outcome, i.e., safeguarding creators’ exclusive rights over the use and distribution of their works, as mandated by copyright laws globally. For such a techno-legal solution to function, the regulator need not define specific AI model types for music generation as the regulatory subject, nor prescribe a particular rights management platform as a compliance mandate. Instead, technologists and companies remain free to innovate in AI music generation, applying any method or architecture they choose—as long as the regulatory outcome of effective copyright protection is achieved.

Question 3: How can techno-legal regulation be designed to avoid becoming redundant or leading to unintended or undesirable consequences?

Techno-legal approaches are intended to tackle the very problem of redundancy in AI regulation, setting clear, outcome-focused obligations for system developers and operators while enabling continuous monitoring and adaptability to emerging risks (as explained in response to Question 1 above).

That said, in addition to having clearly defined regulatory outcomes, techno-legal regulation depends on two key conditions to remain effective and adaptive, ensuring it does not ironically render itself redundant. First, the efficacy of any techno-legal solution must be assessed using well-defined metrics to track its progress toward the regulatory objective. Where direct measurement is impractical, appropriate proxy indicators can be used. Importantly, these metrics should be subject to regular review, ensuring they stay relevant and responsive to emerging externalities and shifts in the operating environment. Second, the techno-legal solution should undergo regular audits to verify its effectiveness and continued alignment with the regulatory objective. This ensures that the system continues to function as intended. When designed with clear objectives, measurable metrics, and periodic auditing, techno-legal regulation remains robust, avoiding potential redundancy and the risk of unintended or undesirable consequences.

Question 4: Wouldn’t the AI Chain architecture under DEPA 2.0 restrict the diversity of relationships in the value chain, thereby limiting novel pathways for innovation?

On the contrary, the AI Chain architecture is specifically designed to enable the broadest diversity of relationships in the AI value chain. Its open, modular design and transparent accountability mechanisms allow various actors—including developers, data providers, service operators, and others—to collaborate with trust and innovate without rigid barriers. This flexibility, in turn, fosters the emergence of novel and unexpected pathways for value creation.

Question 5: Can the allocation of liability—an inherently nuanced area of jurisprudence that has evolved over centuries—be effectively codified into a technology framework?

The allocation of liability, grounded in centuries of jurisprudence, becomes particularly complex when applied to AI. While techno-legal approaches may not be suited to directly assign liability and enforce penalties for AI harms on their own, they could certainly provide valuable tools to help navigate this complexity. For example, the AI Chain architecture under DEPA 2.0 leverages distributed ledger technology to provide end-to-end tracking of system activities and participant actions at a fine-grained level—capturing who performed which action, when, and using which model or dataset, with precise timestamps. Cryptographic proofs such as Merkle trees ensure that every step is irrefutably recorded and immutable. These detailed traces create a tamper-proof, transparent record of events, which auditors, courts, and regulators can use to reconstruct the sequence of actions leading to an AI-related harm.
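The Merkle-tree machinery mentioned above can be sketched in a few lines of Python. This is a simplified, educational construction (production transparency logs, such as the RFC 6962 trees used on the web, differ in detail), and the log entries are invented for illustration:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root over a list of raw log entries."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to prove leaves[index] is in the tree."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from one entry up to the published root."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# Invented example entries for an action log
log = [b"2025-01-01 pharma-x trained model-v1 on trial-2024",
       b"2025-01-02 auditor reviewed logs",
       b"2025-01-03 pharma-x exported model-v1"]
root = merkle_root(log)
proof = inclusion_proof(log, 2)
```

Given only the published root, an auditor can check that a specific event was logged, and any attempt to alter or remove a recorded action changes the root and fails verification.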

The technological observability and causal traceability enabled by the architecture could incentivise good behaviour among ecosystem actors, reduce ambiguity in legal and adjudicatory processes, and support the development of robust AI liability jurisprudence—making liability allocation for AI harms streamlined, scalable, transparent, and fair.

We welcome feedback and suggestions from all stakeholders at [email protected]

Please note: The blog post is authored by Raj Shekhar, with inputs from Sunu Engineer and review by Subodh Sharma, all volunteers with iSPIRT.

FAQs and Facts on Techno-Legal Regulation

This blog is an invitation to advance public discourse on techno-legal regulation of artificial intelligence (AI). It builds on an article by Rahul Matthan (15 January 2025), in which he raised reservations about applying techno-legal regulation to AI governance and expressed concerns about the practicability of techno-legal artefacts, particularly their ability to establish liability chains among ecosystem actors, as a tool for enforcing good behaviour and ensuring accountability for AI harms. Through a Q&A format, this blog addresses those reservations and concerns directly, while explaining why techno-legal regulation is not only feasible but also the only practicable and scalable way to regulate AI effectively.

Techno-legal regulation isn’t a monolithic concept; it can assume multiple implementations for different problems. DEPA Training embeds privacy and sovereignty requirements directly into AI training pipelines through confidential clean rooms and differential privacy. DEPA Inference creates consent-based data sharing. The proposed AI Chain architecture would establish liability tracking through distributed ledgers. Each solves a different problem using the same core principle: making regulatory compliance systematically enforced rather than legally suggested.

The confusion arises because people conflate these distinct systems. DEPA Training ensures AI models can be built through data collaboration, with privacy budgets preventing individual contributions from being traced. DEPA Inference ensures PII-based data can’t be accessed without consent, because the cryptographic handshake fails without a valid consent artifact. AI Chain would ensure accountability can’t be avoided, because every inference generates a log trace. Three different problems, three different techno-legal solutions, one underlying philosophy: architecture enforces what law requires.

Moreover, tools by themselves don’t meet the bar of being techno-legal. That is precisely why techno-legal documents should be crafted to accept technology substrates as key ideas: ideas acknowledged as mechanisable, able to meet certain key properties and invariants in the real world. Tools are just instances that realise these mechanisable properties and invariants. For instance, can policy be expressed as attestable and executable code? Why not? Policy is a set of unambiguous rules, and so long as those rules are unambiguous and computable, they are automatable. If exceptions to a rule exist, they too must be documented.

There is a general worry that introducing identities into AI systems will erode privacy. From a computer-systems standpoint, that conclusion doesn’t follow. What matters is how identifiers are created and managed and what is recorded. With pairwise (service-scoped) identifiers, selective disclosure, and tamper-evident logging of metadata (not payloads), systems can offer accountability and simultaneously uphold Privacy by Design (PbD). These are not speculative ideas: the web and major identity programs already run variants at scale.

OpenID Connect has long supported pairwise subject identifiers, which purposely give each relying party a different, opaque value, curbing cross-service linkability. Aadhaar’s Virtual ID (VID) and UID tokenization make the same design choice in India: a revocable, tokenized identifier is presented instead of the Aadhaar number, and per-agency tokens prevent easy correlation across services while remaining auditable. In both cases, the principle is the same—identity is scoped to a context.

On the web, the W3C Verifiable Credentials (VC) 2.0 model and cryptographic suites such as BBS+ allow a holder to prove only the claims that are necessary (for example, “over 18”) while withholding the rest; the SD-JWT work in the IETF ecosystem supports similar selective-disclosure for JWTs (JSON Web Tokens). The direction of travel — both in standards and deployments — is to treat “need-to-know” as a first-class property.
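
The hash-commitment idea underlying SD-JWT-style selective disclosure can be shown in miniature. Real SD-JWT adds signatures and a precise encoding; this toy keeps only the salted-hash mechanism, and all names are illustrative.

```python
import hashlib
import json
import secrets

def commit(claims: dict):
    """Issuer commits to salted claim hashes; holder keeps the disclosures."""
    disclosures, digests = {}, {}
    for name, value in claims.items():
        salt = secrets.token_hex(8)
        blob = json.dumps([salt, name, value])
        disclosures[name] = blob                    # held privately by the holder
        digests[name] = hashlib.sha256(blob.encode()).hexdigest()  # in the token
    return disclosures, digests

def verify(name: str, blob: str, digests: dict) -> bool:
    """Verifier checks a revealed (salt, name, value) blob against the commitment."""
    return hashlib.sha256(blob.encode()).hexdigest() == digests[name]

disclosures, digests = commit({"over_18": True, "name": "Asha", "address": "redacted"})
# Holder proves age without revealing name or address:
assert verify("over_18", disclosures["over_18"], digests)
```

The verifier learns only the claims the holder chooses to reveal; the salts prevent guessing the withheld ones from their hashes.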

Every time a browser trusts a public TLS certificate, it relies on Certificate Transparency (CT) — append-only Merkle-tree logs with efficient inclusion and consistency proofs—to keep Certificate Authorities honest. Chrome and Apple have required CT for certificates issued after 2018. Therein lies a lesson for AI: append-only, publicly auditable logs are one mature way to record event receipts without exposing content.
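
The mechanism can be sketched as a minimal append-only Merkle log with an inclusion proof. This is a simplified toy, omitting the leaf/node domain separation and signed tree heads that RFC 6962 requires.

```python
import hashlib

h = lambda b: hashlib.sha256(b).digest()

def build_levels(leaves):
    """Build all levels of a Merkle tree bottom-up (odd tails are duplicated)."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def inclusion_proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        proof.append((lvl[index ^ 1], index % 2))  # (sibling, am-I-the-right-child?)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, is_right in proof:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

log = [b"cert-1", b"cert-2", b"cert-3", b"cert-4"]
levels = build_levels(log)
root = levels[-1][0]
assert verify(b"cert-3", inclusion_proof(levels, 2), root)       # proof checks out
assert not verify(b"cert-X", inclusion_proof(levels, 2), root)   # unlogged entry fails
```

A proof is logarithmic in the log size, which is why browsers can verify inclusion cheaply even against logs holding billions of certificates.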

PbD’s “positive-sum” stance is compatible with a metadata-only accountability layer. Instead of retaining prompts, outputs, or personal payloads, systems can emit signed, append-only receipts that capture who/what/which/when: a scoped user identifier, model and dataset versions, operation type (e.g., generate/transform/moderate), timestamp, and the responsible (but not necessarily trusted) operator or process. Auditors later verify that events occurred and in which order via Merkle proofs; when a lawful process requires more detail, selective-disclosure credentials release the minimum necessary information. This is the same architectural separation that keeps web PKI and identity wallets both auditable and privacy-preserving.
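
A metadata-only receipt layer might look like the following sketch: each receipt records who/what/which/when, never the prompt or output, and hash-chaining makes retroactive edits detectable. The field names and chaining scheme are assumptions made for illustration.

```python
import hashlib
import json
import time

chain = []  # append-only in this toy; a real system would persist and replicate it

def emit_receipt(user_scope_id: str, model_version: str, operation: str):
    """Append a metadata-only receipt, linked to the previous one by hash."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"who": user_scope_id, "model": model_version, "op": operation,
            "ts": int(time.time()), "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"body": body, "digest": digest})

def chain_intact() -> bool:
    """Auditor's check: every receipt matches its digest and its predecessor."""
    prev = "0" * 64
    for r in chain:
        if r["body"]["prev"] != prev:
            return False
        d = hashlib.sha256(json.dumps(r["body"], sort_keys=True).encode()).hexdigest()
        if d != r["digest"]:
            return False
        prev = d
    return True

emit_receipt("pairwise-ab12", "model-v3", "generate")
emit_receipt("pairwise-ab12", "model-v3", "moderate")
assert chain_intact()
chain[0]["body"]["op"] = "transform"   # retroactive edit breaks the chain
assert not chain_intact()
```

Nothing in the chain contains a payload, so auditing the chain does not mean reading users' data.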

When we track things securely, we do not create a surveillance state; we create a modelable, measurable, manageable state. Surveillance or damage becomes possible only when tracking data is misused, by parties in power or parties with the power to access the data while bypassing access checks. DEPA liability chains are designed to establish the connections between different parts of the data-economy ecosystem, using strong cryptographic techniques to detect and protect against unauthorised access.

Traceability and agency/activity chains are needed to construct the data economy ecosystem robustly.

India needs techno-legal regulation because we can’t afford not to have it. We don’t have thousands of judges to adjudicate AI harm. We don’t have armies of auditors to verify compliance. We face scale challenges the West does not: governing AI for 1.4 billion people requires architectural enforcement. We need to protect our people and enable our innovators.

The question isn’t whether we need techno-legal regulation; it’s whether we’re honest about what happens without it. Without DEPA Training’s cryptographic enforcement, AI systems will train on unauthorized data because detection is impossible at scale. Without immutable audit trails, companies will claim compliance while violating every principle, because verification requires resources we don’t have. Without architectural enforcement, the most vulnerable Indians (those who can’t afford lawyers, don’t understand technology, and can’t navigate bureaucracy) will be harmed first and most.

The AI space is an unknown space. To define legal regulation in a space, we need to be able to enumerate (exhaustively, if possible) all the failure modes in the system, and then frame regulations to prevent them, detect them, curtail their impact, and correct after the event. When we know the details, we can compute the legal implications and consequences and define a legal regulation (80 percent) supported by technology (20 percent). When we are dealing with an unknown space, unknown in the sense that the failure modes are not enumerable, we can instead do techno-legal regulation in an evolutionary manner (even more so when the activity is distributed in space and time and occurs at high frequency). Here we start with a base implementation and evolve it as failure modes are discovered. Such an evolutionary approach to creating regulation that not only protects but also fosters growth needs to be implemented on a technology substrate (80 percent tech, 20 percent human); otherwise the evolution will be very slow and the regulation will fall out of sync with market needs.

True, current technologies may not be able to solve use limitation and/or data minimisation in the world of AI ex ante. However, the question should be: can we construct testable technical mechanisms to check for violations of these requirements ex post? I believe that is certainly possible: challenging but doable.

DEPA does solve for this, indirectly. Retention restrictions, usage limitation, and data minimization all require a deep understanding of how and where data is being used. DEPA chains track and trace data use and provide this information, enabling the DEPA framework itself to implement and enforce these and other constraints and conditions on data use. Without a technology framework to do this, many more violations of these kinds of conditions are likely to go undetected. The more complex the regulations get, the more technologically advanced and evolutionary the substrate needs to be.

We’re not encoding Platonic ideals of fairness; we’re implementing specific, measurable requirements that regulators and courts have already defined. DEPA Training’s architecture can use techno-legal solutions to enforce fairness principles. It might work like this: when a dataset enters the clean room, the system automatically computes demographic distributions and compares them against regulatory baselines. If biases are detected, appropriate remedial measures are effected.
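
The bias gate described above might be sketched as follows. The baseline shares, the tolerance, and the group labels are all made up for illustration; a real system would take them from the applicable regulation.

```python
# Hypothetical regulatory baseline: expected population shares per group.
BASELINE = {"group_a": 0.48, "group_b": 0.52}
TOLERANCE = 0.10  # maximum allowed absolute deviation per group (illustrative)

def bias_flags(records):
    """Compare the dataset's demographic shares against the baseline.

    Returns, for each baseline group, whether its share deviates beyond
    the tolerance (True = flag for remedial measures).
    """
    counts = {}
    for r in records:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    total = len(records)
    return {g: abs(counts.get(g, 0) / total - share) > TOLERANCE
            for g, share in BASELINE.items()}

# An 80/20 dataset deviates far from the 48/52 baseline, so both groups flag:
dataset = [{"group": "group_a"}] * 80 + [{"group": "group_b"}] * 20
flags = bias_flags(dataset)
assert flags["group_a"] and flags["group_b"]
```

Running such a check automatically at dataset ingestion is what turns a fairness requirement from an audit question into an architectural gate.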

We welcome feedback and suggestions from all stakeholders at [email protected]

Please note: The blog post is authored by our volunteers, Sunu Engineer, Subodh Sharma, Raj Shekhar and Harshit Kacholiya

Lessons from India’s Digital Public Infrastructure Journey

In just a decade, India has redefined how nations can harness technology for the public good. Through Digital Public Infrastructure (DPI) such as Aadhaar, UPI, and Account Aggregator, followed by newer innovations like OCEN and ONDC, India has shown the world how open, interoperable, and inclusive digital systems, when designed as privately provisioned public infrastructure, can spark innovation, scale rapidly, and empower communities at the grassroots.

To capture these lessons and provide a practical guide for policymakers, technologists, and global stakeholders, iSPIRT Foundation has contributed to the development of the DPI Handbook: Foundations of Digital Public Infrastructure. This handbook distills a decade of India’s pioneering experience into actionable insights, frameworks, and design principles that can help other nations build their own inclusive and interoperable DPI. It is now published by the Research and Information System for Developing Countries (RIS).

This Handbook is not the product of a single author, but rather the culmination of years of dedicated volunteerism at iSPIRT, where technologists, policymakers, entrepreneurs, and thinkers came together to exchange ideas, build prototypes, and debate design choices. Each page reflects this collaborative spirit, proof that when diverse minds work in concert, they can create frameworks that transform entire societies.

We extend our deepest gratitude to the iSPIRT volunteer community, past and present, whose passion and commitment have been instrumental in shaping India’s DPI journey. Their contributions embody the ethos of building digital public infrastructure as a shared national mission.

We hope the DPI Handbook becomes both a guide and an inspiration for nations building their own digital public infrastructure, and for all who believe that technology, when designed for the public good, can change the course of societies.

Please note: The blog post is co-authored by our volunteer, Arun Iyer

iSPIRT would like to extend its gratitude to Shri Rajeev Chawla, IAS, Strategic Advisor and Chief Knowledge Officer, Ministry of Agriculture & Farmers’ Welfare, who co-authored this Handbook, for his insightful perspectives. We would also like to thank Shri Sachin Chaturvedi, Director General, RIS, for graciously writing the Preface to this Handbook.

Empowering India’s Growth Engines: The Critical Role of Credit for MSMEs

June 27 – International MSME Day

Micro, Small, and Medium Enterprises (MSMEs) are the unsung heroes of India’s economy. Employing over 11 crore people and contributing nearly 30% to the country’s GDP, MSMEs are not just businesses – they are drivers of innovation, inclusion, and local development.

On this MSME Day, we celebrate their resilience and ingenuity. But it’s also a moment to reflect on what holds them back, and how access to credit remains one of the most critical challenges they face. For MSMEs, timely and adequate credit is often the difference between scaling up and shutting down. Yet the reality is stark: a majority of MSMEs in India still rely on informal sources of finance or are denied loans due to lack of collateral or formal credit history.

According to estimates by the IFC, India’s formal MSME credit gap exceeds ₹25 lakh crore. Despite government schemes and fintech innovations, many small businesses struggle to access formal credit. This gap doesn’t just hurt MSMEs – it stifles job creation, reduces GDP growth, and hampers economic inclusivity.

A Shift Towards Cash-Flow-Based Lending

The good news? The ecosystem is evolving.

With initiatives like Account Aggregator, OCEN (Open Credit Enablement Network), and digitization of GST and banking data, lenders are moving towards cash-flow-based lending models. These innovations focus on real-time business performance rather than outdated collateral-based methods.

Such models enable more flexible, faster, and inclusive credit access to deserving MSMEs, especially those in Tier 2 and 3 cities.

To truly empower MSMEs with credit, the following steps are critical:

  • Financial literacy programs to help MSMEs manage credit and build a borrowing track record.
  • Policy support to incentivize banks and NBFCs for lending to first-time or underserved borrowers.
  • Greater public-private collaboration to build robust digital lending infrastructure.
  • Simplification of loan application processes through digital channels.

Celebrating MSMEs, Supporting Their Dreams

On this MSME Day, let’s go beyond celebration. Let’s reaffirm our commitment to unlocking finance for the backbone of our economy.

Whether you’re a policymaker, lender, fintech innovator, or simply a consumer – supporting MSMEs means supporting India’s future.

Because when MSMEs grow, India grows.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

OCEN Ecosystem Progress Snapshot:

Since its initial pilot deployment, and through continuous upgrades and improvements, the Open Credit Enablement Network (OCEN) has seen growing participation across the ecosystem. OCEN’s transition from its early stages into a growth phase is now reflected in its growing volumes and the addition of new ecosystem partners.

Monthly Progress Report: 

With the start of the new financial year, the April-June quarter is usually considered a sluggish season in financial services, particularly in the lending business. This trend is reflected in OCEN’s April traction as well; however, April and May still look progressive compared with January and February, treating the March financial-year-end rush as an exception. As newer products and lenders go live on OCEN, the growth trendline looks promising.

Here’s a quick look at the latest numbers on the OCEN ecosystem:

| Metric | Jan-25 | Feb-25 | Mar-25 | Apr-25 | May-25 |
|---|---|---|---|---|---|
| No. of Lenders Live on OCEN | 7 | 7 | 7 | 8 | 8 |
| No. of Borrower Agents Live | 6 | 6 | 6 | 6 | 6 |
| No. of Technology Service Providers (TSPs) with active deployment | 2 | 3 | 3 | 3 | 3 |
| No. of Loan Products | 11 | 11 | 11 | 12 | 12 |
| No. of Loans Disbursed | 895 | 1567 | 3179 | 3861 | 4552 |
| Disbursement Amount | ₹25.17 Crore | ₹33.67 Crore | ₹139.11 Crore | ₹76.13 Crore | ₹90.82 Crore |
| Average Loan Ticket Size | ₹2.81 Lakh | ₹2.14 Lakh | ₹4.37 Lakh | ₹1.97 Lakh | ₹1.99 Lakh |
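
As a quick sanity check on the reported figures, the average ticket size should be roughly the disbursement amount divided by the loan count (1 crore = 100 lakh); the figures below are taken from the snapshot above as reported.

```python
# (loans disbursed, disbursement in ₹ crore, reported average ticket in ₹ lakh)
months = {
    "Jan-25": (895, 25.17, 2.81),
    "Feb-25": (1567, 33.67, 2.14),
    "Mar-25": (3179, 139.11, 4.37),
    "Apr-25": (3861, 76.13, 1.97),
    "May-25": (4552, 90.82, 1.99),
}
for month, (loans, crore, reported_lakh) in months.items():
    implied_lakh = crore * 100 / loans   # convert crore to lakh, divide by loans
    assert abs(implied_lakh - reported_lakh) < 0.02, month  # all months consistent
```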

OCEN continues to engage with ecosystem partners to build the momentum for new cash flow lending products for MSMEs.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

OCEN: Credit Access for MSMEs continues to grow

The volumes and traction on the Open Credit Enablement Network (OCEN) continue to grow month on month. The growth trajectory highlights OCEN’s ability to streamline and democratise credit access for MSMEs by leveraging digital public infrastructure and fostering collaboration among lenders, agents, and technology providers.

Here is a snapshot of the OCEN ecosystem’s key updates for March:

| Metric | Jan-25 | Feb-25 | Mar-25 |
|---|---|---|---|
| No. of Lenders Live on OCEN | 7 | 7 | 7 |
| No. of Borrower Agents Live | 6 | 6 | 6 |
| No. of Technology Service Providers (TSPs) with active deployment | 2 | 3 | 3 |
| No. of Loan Products | 11 | 11 | 11 |
| No. of Loans Disbursed | 895 | 1567 | 3179 |
| Disbursement Amount | ₹25.17 Crore | ₹33.67 Crore | ₹139.11 Crore |
| Average Loan Ticket Size | ₹2.81 Lakh | ₹2.14 Lakh | ₹4.37 Lakh |

As OCEN continues to evolve, it is poised to further bridge the credit gap for MSMEs, enabling faster, more transparent, and more inclusive financial support for this vital sector of the Indian economy.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

OCEN: Advancing Digital Public Infrastructure for MSME Credit Access

The Open Credit Enablement Network (OCEN) is steadily progressing from its early stages into a more robust growth phase. With its current ecosystem participants, OCEN has started facilitating smoother credit delivery to MSMEs. At the same time, numerous other players are integrating into the protocol and developing specialized loan offerings tailored to the needs of MSMEs.

Here’s a snapshot of the OCEN ecosystem’s key updates for February:

| Metric | Jan 2025 | Feb 2025 |
|---|---|---|
| No. of Lenders Live on OCEN | 7 | 7 |
| No. of Borrower Agents Live | 6 | 6 |
| No. of Technology Service Providers (TSPs) with active deployment | 2 | 3 |
| No. of Loan Products | 11 | 11 |
| No. of Loans Disbursed | 895 | 1567 |
| Disbursement Amount | ₹25.17 Crore | ₹33.67 Crore |
| Average Loan Ticket Size | ₹2.81 Lakh | ₹2.14 Lakh |

OCEN continues to engage with new participants to further expand the ecosystem, adding new products and scaling up efforts to transform credit access for MSMEs on a large scale.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

Imagining Indian Cities: #1 ‘Creative Bangalore’

There is a need for a continuous conversation about the best way to shape the future of Indian cities. This conversation will take place across multiple cities, with learnings from each other.

Therefore, it is proposed to hold an ‘Imagining Indian Cities’ Workshop annually in a different Indian city. The first took place in Bangalore from 10 to 15 March 2025, with the next two planned for Chennai and Pune.

These Workshops gather academics, practitioners, and urban innovators in a multi-day get-together. Half of each workshop focuses on the host city, and the other half on learnings from elsewhere.

This ‘Creative Bangalore’ Workshop was organised by the Indian think tank iSPIRT Foundation and supported by IISc/IUDX, IIHS, and Dassault Systèmes. It brought together participants chosen to form a sustainable collective capable of shedding light on a number of key questions and moving towards increasingly measurable contributions.
Initial key questions were:

  1. Genericity and reproducibility of the Creative Cities model developed by Patrick Cohendet
  2. Digital Urban Data, Digital Public Infrastructure (DPI) and territorial intelligence
  3. Placement of (Digital) Commons
  4. Digital representations of culture
  5. Digital representations of Wicked Problems

Results:

  • 5 full days, hosted by Bangalore International Centre (D1), IISc/India Urban Data Exchange (D2), Sabha (D3), Indian Institute of Human Settlement (D4) and Dassault Systèmes (D5);
  • More than 50 speakers, in person or online;
  • A rich repository of content, including speaker presentations, slides, and photos: https://drive.google.com/drive/folders/1lxnbuhna3Hz_t49BOGvZG15_0dByi9Ki

OCEN: Enabling Credit for MSMEs with Digital Public Infrastructure

The Open Credit Enablement Network (OCEN), built on open network principles, unbundles MSME lending into specialized components, creating an ecosystem where different entities excel at one specific part of the lending process. These specialized entities focus on tasks such as sourcing, distribution, identity verification, underwriting, capital arrangement, and collections. The result? A seamless, scalable model for MSME lending, made possible by OCEN 4.0.

Since its initial pilot deployment, the OCEN protocol has undergone continuous upgrades and improvements. Based on invaluable feedback and insight from ecosystem participants, the latest specifications address key challenges such as incentive alignment, dispute resolution, and network settlements through robust techno-legal frameworks. With these improvements, OCEN has already transitioned from its early stages and is now entering the growth phase.

OCEN Ecosystem Progress Snapshot:

We are now starting to publish monthly numbers for the OCEN ecosystem to build a trendline of progress.

As of January 2025, here’s a quick look at the latest numbers on the OCEN ecosystem:

| Metric | Jan 2025 |
|---|---|
| No. of Lenders Live on OCEN | 7 |
| No. of Borrower Agents Live | 6 |
| No. of Technology Service Providers (TSPs) with active deployment | 2 |
| No. of Loan Products | 11 |
| No. of Loans Disbursed | 895 |
| Disbursement Amount | ₹25.17 Crore |
| Average Loan Ticket Size | ₹2.81 Lakh |

As the OCEN network grows, it is actively engaging with new participants to expand the ecosystem and scale up with additional products. As new products and partnerships are developed, we are excited to witness how OCEN will continue to evolve and transform credit access for MSMEs at scale.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

As the AI race across the world heats up, a post: “India doesn’t wish to be just a trade colony of China or technology colony of the US”

To succeed at AI, we need a whole-of-nation approach involving deep-tech startups, enabling industrial policy and pre-commercial publicly-funded research.

When the Biden Administration released its AI Diffusion Executive Order a few weeks back, restricting the flow of GPUs to most countries, it became clear that strategic autonomy in AI was of paramount importance to India.

Just being the use-case capital for AI wasn’t the right way to go.

India doesn’t wish to be a trade colony of China or the technology colony of the US.

What makes AI different is that it needs a whole-of-nation approach. To win at AI we need deep-tech startups, enabling industrial policy and pre-commercial publicly-funded research. It is only when all three come together that magic can happen.

Our resistance to the whole-of-nation approach is understandable. After all, our IT Services and SaaS industry came up without the whole-of-nation approach. So, many people thought that the same playbook would apply to AI.

China has proved with DeepSeek’s R1 and Moonshot AI’s Kimi k1.5 that a whole-of-nation approach can have big payoffs. In India, this approach has worked for cryogenic engines, 4G/5G telecom equipment, and India Stack. We do remarkable things when we set our mind to it!

Yes, we have lost some time due to the use-case capital camp. But all is not lost. The field is still young, and many areas like neurosymbolic AI are very much open.

The Biden AI Diffusion order and Chinese successes have given new vigour to the whole-of-nation camp within government, the private sector, and civil society. The debate is now over: you will see some good developments become visible in the coming months. #AI #StrategicAutonomy

Also see: https://www.moneycontrol.com/technology/deepseek-s-llm-success-triggers-big-debate-is-india-s-hesitation-a-strategic-mistake-article-12921811.html

Comparing Key Frameworks in the Digital Lending ecosystem: ULI, OCEN, and AA

In the evolving landscape of financial inclusion and digital lending, India has introduced several innovative frameworks designed to streamline access to credit, enhance transparency, and create seamless financial ecosystems. Among these, the Unified Lending Interface (ULI), Open Credit Enablement Network (OCEN), and Account Aggregator (AA) stand out as key initiatives aimed at modernizing the way credit and financial data are managed.

While all three initiatives aim to transform the lending sector, each has distinct roles, benefits, and functions. To better understand their unique features and how they interact with one another, we’ve put together a detailed comparison chart.

This side-by-side breakdown helps you identify the core differences between ULI, OCEN, and AA, their respective use cases, and how they collectively contribute to building a more inclusive and tech-driven financial ecosystem in India. Whether you’re a fintech enthusiast, a policy maker, or simply looking to understand the future of credit access, this comparison will offer valuable insights into these transformative frameworks.

Purpose
  • ULI (Unified Lending Interface): A standardized API interface for lending institutions, providing a borrower’s financial and non-financial data from various sources, including government databases and financial institutions. It helps financial institutions reduce friction in accessing the information needed for quick loan-underwriting decisions and efficient loan-application processing.
  • OCEN (Open Credit Enablement Network): A framework of application programming interfaces (APIs) for interaction between lenders, loan agents, collection and disbursement partners, derived data providers, and account aggregators. OCEN facilitates the flow of credit between borrowers, lenders, and credit distributors using a common set of standards, so that participants in the credit ecosystem can connect seamlessly without building customised APIs and infrastructure. OCEN aims to enable cash-flow-based unsecured financing for MSMEs, as against balance-sheet and collateral-based financing. Both ULI and AA can be derived data providers in the OCEN ecosystem.
  • AA (Account Aggregator): The Account Aggregator framework allows users to share consent-driven financial data across institutions. Users can access their financial information from multiple institutions in one place and decide who can access their data, for how long, and for what purpose. The FI types are managed by ReBIT.

Users
  • ULI: Regulated entities such as lenders.
  • OCEN: MSME-focused Borrower Agents and lenders; other ecosystem participants include Derived Data Providers, Collection Agents, Disbursement Agents, and KYC Partners.
  • AA: Financial Information Users, which are regulated entities, and individuals who wish to access their own financial details.

Key Functionality
  • ULI: Enables REs and marketplaces to fetch different types of financial and non-financial data for underwriting using a standard interface.
  • OCEN: Standard rails connecting the various participants in the cash-flow-based MSME lending ecosystem; enables customised credit products for MSMEs and empowers the Borrower Agent as a lynchpin and representative of borrowers.
  • AA: Safe, user-consented sharing of financial information between regulated financial institutions via the Account Aggregator framework; individuals get a holistic single source to view financial data across institutions.

Data Usage
  • ULI: Utilizes borrower data from diverse sources like banks, land records, and financial history.
  • OCEN: Utilizes specific business data of MSMEs (invoices, transactions, etc.) for credit-product creation; any kind of data can be passed to the lender as derived data. For example, Government e-Marketplace shares borrower performance data with lenders post consent.
  • AA: Uses consolidated financial data (bank accounts, GST, income, etc.) from FIPs.

Role in Ecosystem
  • ULI: Streamlines credit access by integrating borrower data from multiple sources for accurate financial assessment, enabling faster loan approvals through advanced analytics, facilitating easy integration with standardized APIs, and giving lenders seamless access to comprehensive borrower information, simplifying credit appraisal and reducing documentation.
  • OCEN: Fosters innovation in MSME credit by enabling tailored loan offerings and faster credit flow, extending credit to MSMEs that previously lacked access; reduces the cost of short-tenure, low-ticket lending, making end-use-controlled loans with collection control viable.
  • AA: Promotes financial inclusion by simplifying financial data sharing and improving credit decision-making; users share their data directly with financial institutions in a consented manner, without the data passing through multiple hands.

Technology Backbone
  • ULI: Consent-based data-sharing infrastructure; APIs to connect various data sources with lenders.
  • OCEN: API infrastructure based on the standard OCEN protocol for credit enablement; participant and product registries enable discovery and standardisation within the ecosystem.
  • AA: API-driven, centralized consent architecture defined by ReBIT under the RBI framework.

Regulatory Framework
  • ULI: Proposed under the RBI’s initiative to enhance digital lending infrastructure.
  • OCEN: Digital Public Infrastructure at a mass roll-out stage; once formalised, it will be managed and regulated as advised by regulators.
  • AA: Governed by the RBI’s Account Aggregator framework under the NBFC-AA license.

Use Cases
  • ULI: A farmer applies for a loan to purchase farm equipment; the lender accesses land records and other financial and non-financial data through the ULI interface to underwrite the application.
  • OCEN: Zomato, as a Borrower Agent on behalf of restaurants enrolled on its platform, can offer custom loan products built for restaurant partners by participating lenders based on alternate platform data, provide collection control to lenders via cash-flow entrapment, and act as a representative of borrowers rather than an agent of lenders. Like Zomato, any platform or institution (such as an FPO) sitting on a captive database can utilise OCEN to enable lending on its platform, benefiting its users.
  • AA: Individuals can share multiple bank accounts for specific time periods and purposes using the consent mechanism; businesses can share GSTN data with REs for loan-underwriting purposes using the consent mechanism.

Implementation Stage
  • ULI: Proposed platform; some pilot implementations for certain data sources have been done. Overall, ULI is in a development phase.
  • OCEN: A few pilots (GeM Sahay, GST Sahay, Jan Aushadhi Kendra, and a private network) have been successfully tested; more implementations are underway at various stages and gaining traction.
  • AA: Well established under the RBI’s regulatory framework, with multiple FIPs and FIUs already integrated.

For more information, please visit: http://ocen.dev

Please note: The blog post is authored by our volunteer, Rahul Bhaik

Digital Public Infrastructures

This workshop was organized by the Indian think tank iSPIRT Foundation, French Embassy in India, Consulate General of France in Bangalore, and La French Tech in India, based on the following principles:

  • Gathering high level contributors from India and France: industrials, transdisciplinary academics, diplomats, officials, business founders, think tank members, technology makers;
  • Pushing a Workshop format (not an event, not a round table, not a scientific conference), organizing 3 different days with 3 different viewpoints:
    • Philosophical/epistemological/ human sciences,
    • Economical/techno-legal/social sciences/adoption,
    • Application domains and use cases (Health, Culture, Creative Cities, Agriculture);
  • Targeting recommendations toward the AI Action Summit (Paris, February 2025).

Results:

  • More than 80 speakers, 100 participants in person (in Bangalore or in Paris), 200 participants online;
  • 14 countries represented from all over the world (India, France, Canada, USA, Mexico, Guatemala, Brazil, Germany, Netherlands, Italy, Spain, Portugal, Belgium, Thailand);
  • An opening session featuring the Ambassador of India to France, H.E. Mr. Jawed Ashraf; the French Digital Affairs Ambassador, H.E. Mr. Henri Verdier; and the Consul General of France in Bangalore, Mr. Marc Lamy;
  • A rich repository of content, including speaker presentations, slides and recordings (https://drive.google.com/drive/folders/1eTDbRgw1g8EOBtXS7uRim9I6gtQRbCZB).

8th Open House Session – Balloon Volunteers

Thank you so much for your patience…

Many of you asked when the volunteer programme would open for applications, and here we are again. We do have some changes, though, so please pay attention.

This application process will be available only for the next couple of months and will close by December 20th, 2024. Applications are reviewed on a rolling basis, so apply immediately.

Many of you are already familiar with iSPIRT and its activities; this is your chance to join this volunteer movement. So take some time to review the programmes listed and watch the videos, not just the current ones but also the previous ones, to better understand the journey. Also, please read the Playground Coda and the Volunteer Handbook.

Are there tools we can help build to solve privacy issues, or what kind of packets will help get the internet to remote parts of India? Do you have a better solution to some pressing matters discussed in the videos? Then, you need to apply. Some legacy options and some new options are also available.

Is there some project that strikes your fancy, some part that calls out to you, and you know you can do it? To apply, click this link and follow the process.

I am reminding you again that the deadline is December 20th, 2024.

A journey with iSPIRT is also about the journey with yourself.