Responsible AI

Human Rights in a Tech-enabled Future: Public Interest AI for Citizens

Jun 18, 2025

5 min read

This article is part of Equitech Futures' four-part series on Public Interest AI, focusing on how funders, policymakers, technologists, and citizens can build a more equitable AI future that prioritizes people over profit. You can download a PDF version of the full report here.

Introduction

As AI systems rapidly reshape society, a critical question looms: who do they serve?

Today’s AI ecosystem is dominated by a handful of commercial actors, reinforcing a narrative that speed, scale, and concentration are inevitable. But this trajectory is not fixed. The infrastructure, governance, and norms we establish now will shape our collective future for decades to come.

Just as industrial-era reformers reimagined factories as engines of shared prosperity, we face a similar inflection point with AI. The technology is too consequential to be guided solely by market logic and quarterly earnings. It demands bold institutional innovation grounded in public values.

Across the globe, a quiet revolution is underway. Technologists, policymakers, and communities are asking not just what AI can do, but what it should do—and for whom. Their work proves that alternative, more equitable AI futures are not only imaginable, but already in motion.

This report maps the emerging field of Public Interest AI: systems designed to serve collective needs over private gain. It is a call to action for funders, governments, practitioners, and citizens to invest in the infrastructure, talent, and governance that will ensure AI strengthens democracy, advances equity, and serves the many—not just the few.

What’s AI in the Public Interest?

Before embarking on our journey, it's helpful to establish what we mean by Public Interest AI.

Public Interest AI is not defined by sector or tool but by intention and values. Unlike commercial AI development—which begins with product-market fit—Public Interest AI begins with the question: what does a just and flourishing society need?

Funders we spoke with emphasized the need to reframe their role: not just funding innovation, but shaping its direction. One described this shift as moving from "investing in outcomes" to "investing in the conditions that make good outcomes possible."

This shift requires moving away from extractive models and toward relational, stewarded ecosystems. It also means redefining success metrics—not just reach or revenue, but accountability, community trust, and context-appropriate impact.

Understanding the Moment as a Citizen

AI will affect all global citizens, but our experiences will not be equal. In a global poll across 17 countries, citizens expressed wildly different levels of trust and optimism about AI. In India, 78% of respondents said AI has more benefits than drawbacks. In Kenya, Brazil, and Indonesia, similar optimism runs high. But in the U.S., that number drops to just 35%. These numbers reflect more than attitude—they reflect experience. In higher-income countries, AI is often seen as a marginal convenience: a faster email, a quirky chatbot, a better filter. But in the Global South, AI is more often tied to essential infrastructure: it powers drone-delivered medical supplies, multilingual health hotlines, or agricultural guidance in local dialects. As one respondent put it, “AI here means access.”

This reminds us: AI is planetary in reach, but deeply local in impact. It’s not one future we’re shaping—it’s many. And as global citizens, we must hold the truth that AI will affect us differently depending on where we live, what infrastructure we have, and which voices get to shape the rules.

Challenges for Citizens

Our analysis revealed three intersecting challenges that citizens must navigate:

  1. AI Is Everywhere—But Accountability Isn’t: AI systems now touch nearly every aspect of life: who gets hired, who receives healthcare, how students are evaluated, and even how public resources are distributed. Yet these systems often operate with little transparency or public oversight. Most people don’t know when AI is being used—let alone how to challenge it. Research from Stanford’s Institute for Human-Centered AI (HAI) shows that global skepticism about the ethical conduct of AI companies is growing, while trust in the fairness of AI is declining.
  2. The Narrative Feels Outsourced: Most public conversations about AI are either hype-driven (“it will solve everything”) or doom-laden (“it will destroy us”), with little space for grounded, local perspectives. Ethics panels and expert debates are important—but often leave everyday people feeling like spectators. When the story of AI is told by a few, it excludes the imaginations, values, and concerns of the many.
  3. Imagination Isn’t Taken Seriously: We’re often told that AI is inevitable, abstract, or too complex for non-experts. But that’s a false story. In fact, what’s missing isn’t technical capacity—it’s public imagination. What if communities got to decide what questions AI should answer? What if safety, care, and local relevance were treated as design constraints?

Recommendations for Citizens

These four pathways offer distinct ways to respond to the challenges that lie ahead:

  1. Experiment—and Learn by Doing: The best way to understand AI’s power, limits, and risks is to use it yourself. Try a chatbot, generate an image, test out tools for writing, coding, translation, or accessibility. Explore what they get right—and what they get wrong. Notice where bias shows up. Observe what’s easy, and what’s hidden. We are all, whether we like it or not, part of a global beta test. But that doesn’t mean you’re powerless. As a user, you can collect your own data, develop a critical lens, and help shape public conversations with grounded experience. Becoming AI-literate isn’t just a tech skill—it’s civic preparedness.
  2. Ask Better Questions, Close to Home: AI isn’t just a global issue—it’s also deeply local. School districts are purchasing AI grading tools. Local police may be using predictive systems. Hospitals are deploying chatbots and triage models. Ask questions at town halls, school board meetings, union gatherings, PTA events, or neighborhood associations. Not accusatory questions—curious ones:
    • Is AI being used here?
    • How are decisions being made?
    • Who gets to review or contest them?
  3. Claim Your Agency—Individually and Collectively: Nothing about AI’s trajectory is inevitable. Its impact depends on how we design, use, and govern it. As a citizen, you can shape the future not only through policy, but through your values, your voice, and your relationships. Start with what matters most to you. Choose one issue and begin there. Join a coalition, organize a reading group, support a watchdog organization, or host a community dialogue. Collective action—grounded in care, creativity, and justice—is how better futures are built.
  4. There Is No Opting Out, Only Moving Forward: The 20th-century media theorist Marshall McLuhan is often credited with the observation that “we shape our tools, and thereafter our tools shape us.” AI is no exception. Even if you never touch a chatbot or download an app, AI is already shaping the systems around you: what jobs are available, what news you see, how your insurance is priced, or whether your child’s homework is flagged by an algorithm. With AI, opting out isn’t really an option. The technology is here, and it’s changing us whether we engage or not. So the real question is not “Should I care?” but “How will I respond?” Start by noticing:
    • How is AI affecting your daily life?
    • Who around you is being helped—or harmed—by these tools?
    • Where is power shifting in your workplace, community, or city?

That kind of attention isn’t passive—it’s political. It’s the starting point for pushing back, shaping forward, and reclaiming civic agency in the age of AI.

Case Study: Afro-Feminist AI Governance in Uganda

In Kampala, Uganda, the civic tech organization Pollicy is showing what community-led AI governance can look like. Rather than wait for regulation to trickle down, they’ve built tools and frameworks that invite African women into the design and decision-making processes around AI systems.

One of their flagship efforts is an Afro-feminist governance model for AI—a framework that pushes beyond abstract ethics to ask: whose rights are being prioritized? Whose experiences shape the boundaries of harm? And what does consent look like when data is extracted from communities with little digital protection?

Through workshops, policy labs, and art-infused convenings, Pollicy brings together policymakers, technologists, and citizens to co-create context-sensitive guidelines for AI deployment. “Building trust with communities is essential,” says Pollicy’s Bobina Zulfa. “We have to invert who gets to define what counts as harm—and who gets to imagine the future.”

This isn’t just public education—it’s public governance. And it shows that everyday people, especially those most often excluded, have the vision and tools to co-lead AI futures.

Learn more about the role of funders, policymakers, and technologists in shaping the Public Interest AI movement.


Written by

Abhilash Mishra

Founder and Chief Science Officer

Equitech Futures

Linda Kinning

Director of Equitech Ventures

Equitech Futures