Responsible AI

Reclaiming the Role of the Builder: Public Interest AI for Technologists

Jun 18, 2025

5 min read

This article is part of Equitech Futures' four-part series on Public Interest AI, focusing on how funders, policymakers, technologists, and citizens can build a more equitable AI future that prioritizes people over profit. You can download a PDF version of the full report here.

Introduction

As AI systems rapidly reshape society, a critical question looms: who do they serve?

Today’s AI ecosystem is dominated by a handful of commercial actors, reinforcing a narrative that speed, scale, and concentration are inevitable. But this trajectory is not fixed. The infrastructure, governance, and norms we establish now will shape our collective future for decades to come.

Just as industrial-era reformers reimagined factories as engines of shared prosperity, we face a similar inflection point with AI. The technology is too consequential to be guided solely by market logic and quarterly earnings. It demands bold institutional innovation grounded in public values.

Across the globe, a quiet revolution is underway. Technologists, policymakers, and communities are asking not just what AI can do, but what it should do—and for whom. Their work proves that alternative, more equitable AI futures are not only imaginable, but already in motion.

This report maps the emerging field of Public Interest AI: systems designed to serve collective needs over private gain. It is a call to action for funders, governments, practitioners, and citizens to invest in the infrastructure, talent, and governance that will ensure AI strengthens democracy, advances equity, and serves the many—not just the few.

What’s AI in the Public Interest?

Before embarking on our journey, it's helpful to establish what we mean by Public Interest AI.

Public Interest AI is not defined by sector or tool but by intention and values. Unlike commercial AI development—which begins with product-market fit—Public Interest AI begins with the question: what does a just and flourishing society need?

Funders we spoke with emphasized the need to reframe their role: not just funding innovation, but shaping its direction. One described this shift as moving from "investing in outcomes" to "investing in the conditions that make good outcomes possible."

This shift requires moving away from extractive models and toward relational, stewarded ecosystems. It also means redefining success metrics—not just reach or revenue, but accountability, community trust, and context-appropriate impact.

Challenges for Technologists

Our analysis revealed three intersecting challenges that technologists must navigate:

  1. Building for Context in a Culture of Abstraction: Most AI failures are not technical—they're contextual. Models that perform well in lab environments often fail when deployed in the real world, particularly among underserved populations. Yet industry norms still prioritize benchmark performance over local relevance, edge-case safety, or environmental constraints. Technologists face pressure to generalize, abstract, and scale—at the cost of grounded, community-informed design.
  2. Designing for Contestability in Systems That Obscure: Too many AI systems offer no path to challenge, understand, or correct their outputs. Whether in hiring, lending, or education, users often have no visibility into how decisions are made—or how to appeal them. Technologists are rarely incentivized to design for contestability, yet it is core to both user trust and democratic accountability. Without visibility and redress, flawed systems don't just fail—they disempower.
  3. Working Without Supportive Infrastructure: Even the most values-aligned developers struggle to build ethically if the infrastructure—compute, data, models, and tooling—is inaccessible or misaligned. When infrastructure is proprietary, high-cost, or optimized for commercial efficiency, public-interest builders are constrained in what they can imagine or deliver. Technologists need open, well-governed tools that reflect a diversity of needs and contexts.

Recommendations for Technologists

These four pathways offer practical ways to respond to the challenges described above:

  1. Build With Context, Not Just Scale: Not all AI problems are model problems. Many are mismatches between the technology and the people or environments it's meant to serve. Designing for context means resisting the impulse to abstract everything into a “general use case.” It requires deliberate attention to how language, culture, geography, and infrastructure shape real-world performance. Don’t optimize only for benchmarks—optimize for people. That means working with affected communities to define what success looks like in their world, not just yours. And it means treating edge cases as a feature of inclusive design, not a statistical nuisance.
  2. Design for Contestability—Not Just Function: Imagine building a system where no one can appeal a mistake, trace an error, or ask why something happened. Now imagine that system makes decisions about jobs, education, or healthcare. That’s the status quo in far too many AI deployments. Technologists can shift this norm by embedding contestability directly into design. Make your systems explainable to humans—not just machines. Include logic paths, override options, and avenues for appeal—especially in high-stakes domains. What this might look like in practice:
    • Document decisions in ways that non-specialists can trace
    • Design UX elements that allow users to question or override outputs
    • Partner with domain experts and affected groups to model how harm might occur—and how users might respond
    Contestability isn’t a bonus feature—it’s what makes AI accountable to the people it affects. (A rough code sketch of this idea follows this list.)
  3. Contribute to the Infrastructure You Wish Existed: You’re not just building products—you’re helping define the technical landscape future builders will inherit. When proprietary tools dominate, public-interest experimentation narrows. But when open infrastructure is available—shared models, transparent data, community-governed compute—new possibilities emerge. Look for ways to contribute back: maintain open-source code, share benchmarks, help steward ethically sourced datasets. Choose licenses that reflect care for how tools are used downstream. And when you can, design for the edges: low-resource environments, multilingual use, modularity.
  4. Don’t Go It Alone—Find a Community of Practice: Building public interest AI can feel isolating, especially inside institutions where speed, scale, or profit dominate the agenda. But you’re not alone—and you shouldn’t have to be. Connecting with others who share your values is more than a morale boost—it’s strategic infrastructure. Communities of practice provide code reviews, ethical sounding boards, shared tools, and the courage to ask harder questions. Whether it’s a local civic tech meetup, an open-source collective, or a cross-disciplinary working group, these networks turn isolated builders into collective stewards. Places to look:
    • Public interest tech initiatives (like The Tech We Want, Mozilla Builders, or Code for America)
    • Open-source communities that emphasize equity, accessibility, or responsible AI licensing
    • Interdisciplinary research collaboratives and fellowships
    • Alumni networks from civic-minded bootcamps, universities, or policy labs
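
The contestability recommendation above is easier to see in code. Below is a minimal, hypothetical sketch in Python of a decision record that carries a plain-language explanation, an appeal hook, and a human override path. The names (DecisionRecord, request_review, override) are illustrative, not part of any existing framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One automated decision, stored so a non-specialist can trace and contest it."""
    subject_id: str                   # whom the decision affects
    decision: str                     # e.g. "application_rejected"
    plain_language_reason: str        # explanation written for the affected person
    model_version: str                # which model or configuration produced it
    inputs_summary: dict              # the inputs actually used, in readable form
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_requested: bool = False
    human_override: Optional[str] = None   # set when a reviewer changes the outcome

    def request_review(self, note: str) -> None:
        """Entry point for the affected person to contest the decision."""
        self.appeal_requested = True
        self.inputs_summary["appeal_note"] = note

    def override(self, reviewer: str, new_decision: str) -> None:
        """A human reviewer replaces the automated outcome and is recorded as accountable."""
        self.human_override = f"{reviewer}: {new_decision}"
        self.decision = new_decision
```

The specific fields matter less than the principle: explanation, appeal, and override are first-class parts of the data model rather than afterthoughts bolted on later.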

Case Study: Building Voice-First Infrastructure for India’s AI Future

In India, where over 1,600 languages and dialects are spoken and digital literacy remains uneven, traditional AI interfaces—often English and text-based—exclude millions. People+AI recognized that this wasn’t just a UX problem. It was a justice problem.

Partnering with local organizations and startups like Sarvam AI, People+AI has been laying the technical and governance foundations for voice-enabled public services. This work includes building multilingual speech recognition pipelines, supporting modular agentic workflows, and designing consent-centered data collection strategies. The goal? To create a voice-first infrastructure layer that could underpin everything from healthcare access to education and financial inclusion.
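
To make the shape of such a pipeline concrete, here is a small, purely illustrative Python sketch of consent-checked, dialect-aware routing. It is not People+AI's or Sarvam AI's actual code; detect_dialect, transcribe, and the backend names are placeholders for whatever speech models a real deployment would use.

```python
from dataclasses import dataclass

@dataclass
class VoiceQuery:
    audio: bytes
    caller_id: str
    consent_given: bool   # recorded explicitly at collection time, never assumed

# Hypothetical dialect-specific ASR backends; a real system would load actual models.
ASR_BACKENDS = {"hi-IN": "hindi_asr", "ta-IN": "tamil_asr", "default": "multilingual_asr"}

def detect_dialect(audio: bytes) -> str:
    """Placeholder for a language/dialect identification model."""
    return "hi-IN"

def transcribe(audio: bytes, backend: str) -> str:
    """Placeholder for speech-to-text using the chosen backend."""
    return "<transcript>"

def handle_query(query: VoiceQuery) -> str:
    """Route a voice query: consent check, dialect detection, dialect-specific ASR."""
    if not query.consent_given:
        return "Consent not recorded; audio is neither stored nor processed."
    dialect = detect_dialect(query.audio)
    backend = ASR_BACKENDS.get(dialect, ASR_BACKENDS["default"])
    transcript = transcribe(query.audio, backend)
    # Downstream, modular agents could handle healthcare, education, or financial services.
    return f"[{dialect}] {transcript}"
```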

Rather than chasing the latest frontier model, the team prioritized context: “Voice bots in local dialects aren’t just a feature—they’re a precondition for access,” said David Menezes. Their approach balances technical experimentation with deep engagement in rural communities, treating interface design as civic architecture—not just user flow.

This isn’t just an engineering success—it’s a public interest blueprint for AI built with and for the people most often left out of innovation cycles.

What Are Public Interest Values?

Before you can design for the public interest, you have to define what it means. Across dozens of interviews and field insights, a consistent set of values emerged. These values can serve as guideposts in technical decision-making, architecture choices, and team alignment.

Here’s a framework to carry into your work:

  • Equity: Design systems that explicitly account for historical disadvantage and structural bias. This may mean tuning for edge cases—not just majority use cases.
  • Transparency: Log decisions. Document data pipelines. Make model behavior visible and interrogable by other humans—not just other machines.
  • Participation: Include affected communities not just in feedback loops, but in problem framing and decision-making. Participation isn’t a user test—it’s co-design.
  • Accountability: Design systems with built-in contestability. Can someone challenge a decision? Trace an error? Recover from harm?
  • Contextual Responsiveness: Resist one-size-fits-all abstractions. Good AI adapts to cultural, linguistic, environmental, and legal realities—not the other way around.
  • Collective Benefit: Optimize for public value, not just efficiency. Ask: who benefits? Who bears the risk? And who gets to decide?

Designing for Context and Contestability

Most AI failures aren’t technical—they’re contextual. A model may function well in lab conditions or on benchmark datasets, but fail catastrophically when exposed to diverse real-world environments—especially those it was never designed to understand. Designing for context means anticipating those failures and building systems that account for diversity, edge cases, and plural definitions of success.

It also means building contestability into the system. Contestability is the ability for people to question, challenge, or appeal decisions made by an AI system. Can a teacher contest an AI’s grading of student performance? Can a job applicant understand why they were rejected by an automated screener—and request a review? If not, the system isn’t just flawed—it’s undemocratic.

Design for:

  • Local relevance (e.g., dialect, device constraints, bandwidth variability)
  • Edge-case safety, not just average-case optimization (see the sketch after this list)
  • User contestability, including human override, appeals processes, and visible logic paths
  • Feedback integration loops that close the gap between usage and model iteration
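
One concrete way to act on "edge-case safety, not just average-case optimization" is to evaluate per slice (dialect, device class, bandwidth tier) instead of only in aggregate. The sketch below is a generic illustration; the field names ("group", "input", "label") and the threshold are assumptions, not a standard.

```python
from collections import defaultdict

def slice_accuracy(examples, predict):
    """Report accuracy per group so weak slices stay visible instead of being averaged away.

    `examples` is an iterable of dicts with "input", "label", and "group"
    (for instance a dialect or device tier); `predict` is the model under test.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        group = ex["group"]
        total[group] += 1
        if predict(ex["input"]) == ex["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def meets_bar(per_slice, minimum=0.85):
    """A release gate that requires every slice, not just the average, to clear the bar."""
    return all(acc >= minimum for acc in per_slice.values())
```

Paired with a feedback loop, the same per-slice report tells you which communities the next iteration should prioritize.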

As Michelle Shevin reminded us, “Bringing people together who share in the impacts of AI—rather than centering AI—is where we’ll build real value.”

Infrastructure for Public Interest Development

Even the most values-aligned engineers hit a wall if the tools they need aren’t accessible. Public interest technologists shared how access to infrastructure—compute, models, data—often shapes what they can build, and who they can build for.

One powerful example came from our interview with Vilas Dhar of the Patrick J. McGovern Foundation. He described how Indigenous communities in the Amazon are using AI to monitor deforestation via satellite imagery. But the success of that project didn’t hinge on novel model architecture. It hinged on how the system was built—with community consent, co-governance, and clear accountability. The technologists in this project acted not just as builders but as stewards: translating community priorities into technical design decisions.

This is what it looks like when infrastructure supports—not overrides—local agency.

Technologists can:

  • Contribute to and maintain open-source, responsibly trained models
  • Use data governance frameworks that protect rights and prioritize consent (a small sketch follows this list)
  • Build tools optimized for low-resource or multilingual environments
  • Choose licenses and deployment methods that reflect public values
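
As one small illustration of consent-first data governance, the filter below keeps only records whose contributors documented permission for the intended use. The record schema and the "allowed_uses" field are assumptions for the sake of the example, not an existing standard.

```python
def usable_for(records, purpose):
    """Yield only records whose documented consent covers this purpose.

    Each record is expected to carry an "allowed_uses" list captured at collection
    time; anything without that documentation is excluded by default.
    """
    for record in records:
        if purpose in record.get("allowed_uses", []):
            yield record

dataset = [
    {"text": "...", "allowed_uses": ["research", "public_service"]},
    {"text": "...", "allowed_uses": ["research"]},
    {"text": "..."},  # no documented consent, so it is excluded
]

training_subset = list(usable_for(dataset, "public_service"))   # keeps only the first record
```

Defaulting to exclusion when consent is undocumented is the design choice that makes the governance framework enforceable in code rather than merely aspirational.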

Learn more about the role of funders, policymakers, and citizens in shaping the Public Interest AI movement.

Written by

Abhilash Mishra

Founder and Chief Science Officer

Equitech Futures

Linda Kinning

Director of Equitech Ventures

Equitech Futures