Perspectives

From AI Safety to AI Security: What We Lose in Translation

Nov 19, 2025

5 min read

Words matter.

Words hold power. Words shape discourse. Words mobilize.

Between 2023 and 2025, discussions around artificial intelligence (AI) underwent a subtle yet profound shift in terminology, from “AI safety” to “AI security.” While many use these terms interchangeably, they are, in fact, fundamentally different. They trigger different emotions, advance different priorities, and serve different agendas. To call something a matter of security is to securitize it. And to securitize an issue is to move it from the realm of normal politics, where citizens can debate, influence, and hold leaders accountable, to the realm of exceptional, emergency politics, where democratic processes are sidelined in favor of whatever the state deems necessary to defend its national security. In this shift, citizens cease to be participants and instead become objects of security. Their fears are weaponized, their access to critical information is curtailed, and their rights are quietly bargained away, all in the name of protection from some elusive threat.

The Senate Hearings: A Case Study

To see this reorientation in action, walk with me down Senate hearings lane, where the evolving language between American legislators and OpenAI CEO Sam Altman lays it bare. Altman appeared twice before the Senate: once in May 2023 and again in May 2025. In the 2023 testimony, AI safety was the focal point, dominating the discussion with 29 mentions. Securitized language, by contrast, made only a modest appearance: “security” came up 5 times, never once in the context of national security, and the strategic rival, China, barely reached 6. By his 2025 testimony, the vocabulary surrounding AI had undergone a striking transformation: “security” took center stage with 37 mentions, “safety” was demoted to 6, and China became the leitmotif at 42. In two short years, discussions of AI, from both the public and the private perspectives, shifted from keeping the technology safe to defending the country against a national security threat. The words changed, and with them, so did the politics.
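Counts like these can be reproduced with a few lines of term-frequency tallying. Below is a minimal sketch, assuming the two hearing transcripts have been saved locally as plain text; the filenames are placeholders, not official sources, and exact totals will depend on counting conventions (for instance, whether compounds like “cybersecurity” are included).

```python
import re
from collections import Counter

# Terms whose frequency shifted between the 2023 and 2025 testimonies.
TERMS = ["safety", "security", "china"]

def count_terms(text: str, terms: list[str]) -> Counter:
    """Count whole-word, case-insensitive mentions of each term."""
    counts = Counter()
    lowered = text.lower()
    for term in terms:
        # \b word boundaries keep "security" from also matching
        # "cybersecurity"; a different convention would shift the totals.
        counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", lowered))
    return counts

# Placeholder filenames: save each transcript as plain text first.
for path in ["altman_testimony_2023.txt", "altman_testimony_2025.txt"]:
    with open(path, encoding="utf-8") as f:
        print(path, dict(count_terms(f.read(), TERMS)))
```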

But why does this matter? Allow me to present three key concerns.

Blurring the Lines Between Military and Civilian

First, when we securitize an issue, the lines between the military and the civilian, the domestic and the foreign, begin to dissolve. “National Security” becomes a catch-all justification for state action. Once the label is fixed, actions taken to address it no longer need public explanation. The term itself becomes the legitimizer, so long as the audience, the object of security, accepts the framing. Historically, a similar linguistic maneuver emerged during the Cold War. American officials deliberately reframed “security” as national and state security, replacing the previously favored military term, “defense.” The move was strategic: the war effort demanded a fusion of military and civilian activities, and blurring those boundaries became essential. Unlike “defense,” which carried a strictly geopolitical and military connotation, “security” could be invoked more broadly to rally the public against threats both foreign and domestic. As this episode demonstrates, the dismantling of such distinctions began with the seemingly benign act of choosing a different word.

The Erosion of Individual Liberty

Second, there is a real tension between security and individual liberty. History illustrates this pattern well: the Patriot Act after 9/11, COVID-19 surveillance creep, and the UK’s “Snoopers’ Charter” all expanded state surveillance in the name of security. These policies, like many others, were born in moments when a threat, or a threatening actor, endangered national security. In exceptional times, exceptional measures are tolerated. And one can understand such tolerance. The trouble begins when the moment passes but the measure remains. What was once temporary and extraordinary becomes ordinary, and societies adjust, almost imperceptibly, their expectations of liberty.

Preemptive Securitization and Corporate Capture

Third, some issues genuinely warrant securitization: nuclear weapons, national defense, and border protection, to name a few. The securitization under discussion here, however, is of a different sort: a preemptive framing of issues that pose no existential risk in any real sense of the word. We find ourselves today in an era of over-securitization, where “National Security” is invoked less as a shield against genuine danger and more as a tool to stoke fear, justify extraordinary measures, and rally public compliance.

AI complicates this dynamic further. Unlike nuclear technologies, developed in government labs under strict state oversight, AI is largely controlled by a handful of private companies whose primary allegiance is to shareholders, not the state, and certainly not its citizens. Yet these corporations have become key securitizing actors, framing AI as a national security threat to consolidate power, justify reckless innovation, and sharpen their competitive edge against both domestic and foreign rivals.

OpenAI's Securitized Playbook

Consider OpenAI's letter to the Office of Science and Technology Policy. Blatantly overusing securitized language, the letter opens with: “As America’s world-leading AI sector approaches artificial general intelligence (AGI), with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI, built on democratic principles, continues to prevail over CCP-built autocratic, authoritarian AI.”

From there, the letter reads less like a policy recommendation and more like a corporate wish list. OpenAI urges the government to fast-track facility clearances for frontier AI labs “committed to supporting national security,” ease compliance with federal security regulations for AI tools, and accelerate AI testing, deployment, and procurement across federal agencies. It also calls on Washington to tighten export controls on OpenAI’s competitors, promote the global adoption of “American AI systems,” and, predictably, relax regulations on privacy, data ownership, and intellectual property.

The message is clear: if regulations are not loosened and approvals not sped up, the U.S. will undoubtedly fall behind China, placing its national security at grave risk. This kind of rhetoric, so bluntly designed to manipulate public perception and justify allowing private AI companies to “move fast and break things” under the looming threat of the communist boogeyman, is profoundly dangerous. If the bar for what counts as a national security issue continues to drop, our capacity to act as informed, engaged citizens will steadily erode.

Reclaiming Democratic Agency

What, then, can we do?

Well, we can start by bringing this linguistic maneuver into the light and exposing it for what it is: a rhetorical battering ram designed to bulldoze oversight, accountability, and our individual liberties. Getting familiar with the ways in which we are being manipulated by both state and corporate actors who capitalize on fear is the first step toward reclaiming our democratic agency. When we recognize how the language of security is used to shut down debate, limit transparency, and fast-track harmful policies, we can begin to resist its creep into the vocabulary of everyday governance. Public awareness, in moments like this, becomes both our safeguard and our strongest countermeasure against manufactured urgency.

That said, vigilance must follow awareness. Protecting the mechanisms that give us a say in how power and resources are allocated requires that we reject emergency logic as the baseline for policymaking. We must insist that policy be debated openly and that fear never stand in for evidence.

So, the next time you hear the words “National Security” invoked in whatever context, but especially around AI, pause and ask: Whose security? Against what threat? And at what cost?

This piece is an op-ed by Equitech Scholar Laila Shaheen, written as part of the Equitech Futures Institute, Oxford, in July 2025.

Written by

Laila Shaheen

Equitech Scholar

Canada and Syria
