Responsible AI

Ensuring AI Innovation Benefits Everyone: Public Interest AI for Policymakers

Jun 18, 2025

5 min read

This article is part of Equitech Futures' four-part series on Public Interest AI, focusing on how funders, policymakers, technologists, and citizens can build a more equitable AI future that prioritizes people over profit. You can download a PDF version of the full report here.

Introduction

As AI systems rapidly reshape society, a critical question looms: who do they serve?

Today’s AI ecosystem is dominated by a handful of commercial actors, reinforcing a narrative that speed, scale, and concentration are inevitable. But this trajectory is not fixed. The infrastructure, governance, and norms we establish now will shape our collective future for decades to come.

Just as industrial-era reformers reimagined factories as engines of shared prosperity, we face a similar inflection point with AI. The technology is too consequential to be guided solely by market logic and quarterly earnings. It demands bold institutional innovation grounded in public values.

Across the globe, a quiet revolution is underway. Technologists, policymakers, and communities are asking not just what AI can do, but what it should do—and for whom. Their work proves that alternative, more equitable AI futures are not only imaginable, but already in motion.

This report maps the emerging field of Public Interest AI: systems designed to serve collective needs over private gain. It is a call to action for funders, governments, practitioners, and citizens to invest in the infrastructure, talent, and governance that will ensure AI strengthens democracy, advances equity, and serves the many—not just the few.

What’s AI in the Public Interest?

Before embarking on our journey, it's helpful to establish what we mean by Public Interest AI.

Public Interest AI is not defined by sector or tool but by intention and values. Unlike commercial AI development—which begins with product-market fit—Public Interest AI begins with the question: what does a just and flourishing society need?

Funders we spoke with emphasized the need to reframe their role: not just funding innovation, but shaping its direction. One described this shift as moving from "investing in outcomes" to "investing in the conditions that make good outcomes possible."

This shift requires moving away from extractive models and toward relational, stewarded ecosystems. It also means redefining success metrics—not just reach or revenue, but accountability, community trust, and context-appropriate impact.

Understanding the Moment as a Policymaker

Public Interest AI requires governments to do more than regulate. It requires leadership in defining public values, resourcing alternatives to private control, and enabling communities to shape the systems they live within. We take inspiration from the Public Interest Law movement, which emerged in the United States in the 1960s as a response to civil rights struggles, environmental crises, and growing corporate power. Legal scholars and advocates sought to rebalance the scales—ensuring that law served the public good, not just private interests.

Today, we face a parallel challenge. As AI systems increasingly mediate access to opportunity, information, and public services, policymakers have a responsibility not only to mitigate harm but to proactively shape the trajectory of technological development. Just as public interest lawyers carved out space for civil rights, environmental protection, and consumer advocacy, today’s leaders must carve out a role for justice, equity, and collective oversight in the digital realm.

In this moment, regulation alone is insufficient. Governments must act as stewards of the public imagination—setting standards, funding infrastructure, protecting rights, and ensuring that AI works for the many, not just the few. The risks of inaction are too great to leave the future of AI solely in the hands of private enterprise.

Challenges for Policymakers

Our analysis revealed three intersecting challenges that policymakers must navigate:

  1. Regulating AI without stifling beneficial innovation: Regulation is often framed narrowly as a list of restrictions—what AI companies can and cannot do. But this limited view misses a critical opportunity. AI regulation should be a tool not only for risk mitigation, but for futures design: shaping the kinds of technologies we want to flourish. In a public interest framework, this means using regulation to embed values like transparency, equity, accountability, participation, contextual responsiveness, and collective benefit into the foundations of AI development. Rather than simply reacting to harms after they've occurred, regulators can proactively set the terms of innovation—encouraging socially valuable use cases and disincentivizing extractive or harmful ones. This includes regulating areas too often excluded from narrow "AI risk" definitions, such as:
    • Environmental Costs: Training large AI models like GPT-3 can emit over 500 tons of CO₂ and consume hundreds of thousands of liters of water—raising urgent concerns about AI’s carbon and water footprint. Without intervention, global data center energy use could exceed 1,000 TWh annually by 2026, rivaling national consumption levels (see the back-of-envelope sketch after this list).
    • Consumer Protections: AI systems can mislead consumers or make harmful decisions, from biased credit scoring to misleading chatbots.
    • Data Privacy and Ownership: Companies have trained models using scraped personal data without consent, triggering lawsuits and public backlash. Communities need enforceable rights to data ownership, transparency, and redress—especially as AI becomes embedded in public infrastructure.
    • Labor Market Disruptions: AI threatens to automate up to 50% of entry-level white-collar jobs in the next five years, especially in customer service, legal, and finance. Without proactive, labor-forward policy, this shift risks deepening precarity and inequality.
  2. Steering AI towards equity, trust, and accountability: Good governance is not neutral—it protects the public from harm and sets the terms under which innovation can serve the common good. For AI, this starts with clear expectations: systems must be understandable, challengeable, and accountable. Opaque models—especially in high-stakes domains like healthcare, education, or criminal justice—erode agency and trust. Policymakers must ensure AI does not sidestep civil and consumer protections, including the rights to consent, non-discrimination, and redress. The misuse of generative AI—for instance, in health misinformation or deepfakes—has already prompted global regulatory responses. But trust cannot be outsourced to corporate ethics; it must be structured into oversight.
  3. Producing, not just regulating technology: Governments have largely played a reactive role in AI—regulating or procuring technologies developed in the private sector. But AI’s scale and societal impact demand a stronger public role in shaping its development. Without public infrastructure, mission-driven R&D, and workforce investment, the direction of AI will continue to be set by commercial priorities. Public Interest AI requires not just rules for others, but public capacity to build and steer the future of the technology itself.
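To make the environmental figures above concrete, here is a minimal back-of-envelope sketch in Python. It assumes the published estimate of roughly 1,287 MWh of training energy for GPT-3 and an average grid carbon intensity of about 0.43 kgCO₂e per kWh (both drawn from Patterson et al., 2021); these are illustrative assumptions, not measurements of any particular deployment.

```python
# Back-of-envelope estimate of CO2 emissions from training one large model.
# Assumptions (not measurements): ~1,287 MWh of training energy for GPT-3
# and an average grid carbon intensity of ~0.429 kgCO2e/kWh, per published
# estimates (Patterson et al., 2021).

TRAINING_ENERGY_KWH = 1_287_000      # ~1,287 MWh, estimated for GPT-3
GRID_INTENSITY_KG_PER_KWH = 0.429    # assumed grid-average carbon intensity

emissions_tonnes = TRAINING_ENERGY_KWH * GRID_INTENSITY_KG_PER_KWH / 1_000
print(f"Estimated training emissions: {emissions_tonnes:,.0f} tCO2e")
# ~552 tCO2e, consistent with the "over 500 tons" figure cited above
```

The same arithmetic scales to the data center figure: at 1,000 TWh per year and the same assumed grid intensity, annual emissions would be on the order of 430 million tonnes of CO₂e.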

Recommendations for Policymakers

To meet the moment, policymakers must do more than mitigate harm—they must shape the conditions for AI to serve the public good. These three priorities offer concrete levers to respond to the challenges ahead:

  1. Build Public Infrastructure for AI: AI’s foundations—compute power, data, and model access—are currently controlled by a handful of firms. That concentration of power constrains who gets to experiment, who can audit, and who benefits. A public-interest approach must reverse this trend through public investment in shared infrastructure. Governments can take tangible steps now:
    • Fund public compute initiatives like CalCompute and EmpireAI to ensure mission-driven actors are not priced out of experimentation.
    • Support open-source model development with transparent training data, cooperative licenses, and democratic oversight.
    • Create and steward data trusts and civic datasets that reflect local contexts, uphold privacy, and are governed by communities—not just scraped from them.
    • Invest in public-sector AI talent through fellowships, R&D labs, and career pathways that make it viable to build for the public good.
  2. Coordinate Globally to Set Norms and Prevent Arbitrage: Public Interest AI cannot flourish in isolation. To prevent regulatory arbitrage and ensure that democratic values shape the global trajectory of AI, governments must coordinate across borders. A key foundation is the OECD AI Principles, adopted in 2019 by over 40 countries, which set out five guiding commitments: inclusive growth and well-being, human-centered values, transparency and explainability, robustness and safety, and accountability. These principles have already shaped national strategies and legislative efforts such as the EU AI Act.
  3. Enforce Robust Consumer and Civil Protections: As AI reshapes how people access services, make decisions, and navigate the world, strong protections are essential to prevent harm—especially for marginalized communities. Consumer and civil rights must be upheld, modernized, and enforced in the age of automation. This includes:
    • Requiring transparency when AI is used and ensuring people have the right to opt out, appeal, or contest AI-generated decisions.
    • Enforcing data ownership, consent, and portability laws that protect people from having their personal information used without permission.
    • Prohibiting discriminatory or deceptive AI practices, especially in financial services, housing, healthcare, and employment.
    • Mandating accessible pathways to redress for people affected by AI decisions, particularly in public systems.

Public trust cannot rest on corporate promises—it must be backed by enforceable rights.

Case Study: Rwanda’s Drone-Enabled Blood Delivery System

When Rwanda set out to improve health outcomes in its rural communities, it faced a familiar infrastructure challenge: distance. Blood supplies often expired before reaching remote clinics, and emergency transfusions were delayed—sometimes fatally—because delivery systems couldn’t move quickly enough. The result was both tragic and costly: high maternal mortality rates and massive blood waste.

In response, the Rwandan government partnered with Zipline, a drone logistics company, to pilot a national blood delivery program using unmanned aerial vehicles. The system bypassed road delays, launched from centralized depots, and dropped medical supplies within minutes. But the government did more than buy a tech service—it shaped the service. Zipline’s operations were integrated into the public health system, and the government established oversight mechanisms to ensure safety, equity, and data sharing.

The results were transformative. Between 2017 and 2019, blood waste dropped by 67%, and access to emergency transfusions increased dramatically. Beyond logistics, Rwanda demonstrated how public institutions can lead on tech-enabled services—not just contract them out. Its model of integration, accountability, and community benefit offers a roadmap for Public Interest AI deployment more broadly.

Learn more about the role of funders, technologists, and citizens in shaping the Public Interest AI movement.

Written by

Abhilash Mishra

Founder and Chief Science Officer

Equitech Futures

Linda Kinning

Director of Equitech Ventures

Equitech Futures