Responsible AI

Catalyzing the Future We Want: Public Interest AI for Funders

Jun 18, 2025

5 min read

This article is part of Equitech Futures' four-part series on Public Interest AI, focusing on how funders, policymakers, technologists, and citizens can build a more equitable AI future that prioritizes people over profit. You can download a PDF version of the full report here.

Introduction

As AI systems rapidly reshape society, a critical question looms: who do they serve?

Today’s AI ecosystem is dominated by a handful of commercial actors, reinforcing a narrative that speed, scale, and concentration are inevitable. But this trajectory is not fixed. The infrastructure, governance, and norms we establish now will shape our collective future for decades to come.

Just as industrial-era reformers reimagined factories as engines of shared prosperity, we face a similar inflection point with AI. The technology is too consequential to be guided solely by market logic and quarterly earnings. It demands bold institutional innovation grounded in public values.

Across the globe, a quiet revolution is underway. Technologists, policymakers, and communities are asking not just what AI can do, but what it should do—and for whom. Their work proves that alternative, more equitable AI futures are not only imaginable, but already in motion.

This report maps the emerging field of Public Interest AI: systems designed to serve collective needs over private gain. It is a call to action for funders, governments, practitioners, and citizens to invest in the infrastructure, talent, and governance that will ensure AI strengthens democracy, advances equity, and serves the many—not just the few.

What’s AI in the Public Interest?

Before embarking on our journey, it's helpful to establish what we mean by Public Interest AI.

Public Interest AI is not defined by sector or tool but by intention and values. Unlike commercial AI development—which begins with product-market fit—Public Interest AI begins with the question: what does a just and flourishing society need?

Funders we spoke with emphasized the need to reframe their role: not just funding innovation, but shaping its direction. One described this shift as moving from "investing in outcomes" to "investing in the conditions that make good outcomes possible."

This shift requires moving away from extractive models and toward relational, stewarded ecosystems. It also means redefining success metrics—not just reach or revenue, but accountability, community trust, and context-appropriate impact.

Understanding the Moment as a Funder

A Timescale Problem

The Rockefeller Foundation spent decades eradicating hookworm. The Gates Foundation has battled malaria for 25 years. But AI moves on internet time: ChatGPT reached 100 million users in two months. The technological foundations shaping the 21st century are being laid at a pace that challenges funders used to patient capital and gradual change. The sector most in need of long-term thinking is advancing at breakneck speed.

A handful of tech firms and VC-backed startups dominate AI development. Without intervention, this will produce predictable outcomes:

  • Concentration of power among a few firms controlling compute, models, and data.
  • Widening inequality as market-driven AI ignores those without the ability to pay, mirroring vaccine access failures.
  • Poor last-mile adoption, especially in the Global South, where literacy gaps limit use even when tools exist.
  • Regulatory capture as policy lags and private actors shape the rules.

Philanthropy and impact investing have a vital role to play. As public funding recedes and commercial incentives fall short, funders can step in—building the infrastructure, talent, and governance to ensure AI serves the many, not just the few.

Challenges for Funders

Our analysis revealed four intersecting challenges that philanthropic funders and impact investors must navigate:

  1. Market Failures: Technology development follows market demand—products are built for those who can pay. But billions stand to benefit from AI despite lacking purchasing power. This creates classic market failures, where innovation bypasses the most pressing global needs.
  2. Short-Termism: AI developed by Big Tech and VC-backed startups is shaped by short-term incentives. Even philanthropy can fall into this trap, with shifting priorities and project-based grants that overlook the long timelines needed to build durable institutions and ecosystems.
  3. Capacity and Infrastructure Gaps: Most foundations weren’t built to assess data pipelines, compute needs, or model governance. Technical expertise remains concentrated in the private sector, and the pace of change makes it hard for funders to even know what questions to ask. As Sherry Huang of the Hewlett Foundation noted, “Most funders have never had to think about technology in their work... there needs to be a lot more education.”
  4. Power Concentration: A small number of corporations control the infrastructure, models, and talent shaping AI. Public interest actors operate on borrowed tools and timelines, with limited influence over priorities or outcomes. Without investment in open infrastructure and community-led innovation, philanthropy risks reinforcing the very power imbalances it seeks to challenge.

Recommendations for Funders

Public Interest AI requires scaffolding—technological, social, and political. Our research surfaced four core areas where philanthropy and impact investing can have lasting impact:

  1. Build Resilient Digital Public Infrastructure: Durable digital infrastructure is the foundation of a healthy AI ecosystem. In addition to digital identity, payments, and registries, AI-specific infrastructure includes high-quality datasets and compute platforms—assets unlikely to be funded by private markets. Philanthropy is well-positioned to support these public goods. For example, Wikipedia—backed by the Wikimedia Foundation—has been essential to training modern large language models. Similarly, Nightingale Open Science, funded by the Moore and Schmidt foundations, is enabling new AI tools for health. To maximize impact:
    • Fund representative datasets, especially for underserved populations where data is scarce and commercial incentives are weak.
    • Invest in public compute and model development, especially for use cases and regions (e.g., the Global South) ignored by market-driven AI.
  2. Support R&D for Public Interest AI: Philanthropy helped seed the early AI breakthroughs. As major labs have shifted to commercial models, incentives have drifted from public needs. Philanthropy must fill the gap—just as it has in areas like vaccine development. To accelerate this:
    • Invest in use-inspired R&D, backing early-stage products that solve specific problems unlikely to attract private capital. Groups like Wadhwani AI show how this can work.
    • Shape the market through independent benchmarks and incentives that spur model innovation aligned with public values. Benchmarks like ImageNet have shaped entire research directions; public-interest benchmarks can do the same.
  3. Build Ecosystem Capacity: Human capital is central to responsible AI adoption. But capacity-building must go beyond engineers—it must include public literacy, policymaker fluency, and funder education. Philanthropy can:
    • Support lifelong AI learning infrastructure, especially in regions or systems where public education budgets are shrinking.
    • Educate capital allocators, creating pathways for philanthropists, investors, and institutions to learn how to fund AI aligned with the public good.
  4. Support Policymakers and Governance Innovation: Effective regulation must balance innovation with public accountability. Policymakers need tools to prevent harm and counter monopolistic behavior—but they also need trusted networks and adaptive frameworks. Philanthropy can:
    • Fund independent policy research and education.
    • Invest in relational infrastructure—the connective tissue that allows ecosystems to function: bridge roles between communities and institutions, convening platforms for trust-building, and spaces for collaborative sense-making. These are often overlooked but essential for enabling durable, inclusive governance.

Without these investments, governance risks being reactive, fragmented, or captured by private interests.

“We have a lot of focus on problem solving, and not a lot of focus on relational infrastructure. But the quality of our relationships determines the quality of our outcomes.” - Michelle Shevin, Future Preservation Society

Case Study: Foundation for Civic AI at Cornell Tech

When the COVID-19 pandemic hit New York City, the complexity of the crisis exposed a major flaw in how public agencies accessed and used data. Information was fragmented, modeling capacity was uneven, and community-based organizations had little visibility into how decisions were made—resulting in delays and inequities in everything from vaccine distribution to emergency relief.

To meet this challenge, Cornell Tech partnered with community advocates and city agencies to launch the Foundation for Civic AI. With the backing of a donated NVIDIA supercomputer, the initiative created a public-interest compute and model infrastructure. But what made it distinct wasn’t just the technical capacity—it was the governance model: civic technologists, local officials, and community members worked together to co-design tools that aligned with public values.

The result? A new infrastructure that helped planners anticipate and address inequities in real time—and a replicable governance framework for community-shaped AI. As one collaborator put it, “This wasn’t just tech for good. This was tech with the people who needed it most.”

Learn more about the role of policymakers, technologists, and citizens in shaping the Public Interest AI movement.

Written by

Abhilash Mishra
Founder and Chief Science Officer, Equitech Futures

Linda Kinning
Director of Equitech Ventures, Equitech Futures