By Christina Maas – Reclaim The Net
A once-classified federal strategy paper has surfaced, pulling back the curtain on how the Biden administration planned to address domestic terrorism. Released by Director of National Intelligence Tulsi Gabbard after legal pressure from America First Legal (AFL), the document shows a government effort that stretches far beyond traditional national security work.
We obtained a copy of the documents for you here.
The 15-page plan, dated June 2021, outlines a series of objectives aimed at curbing domestic extremism. What’s caught critics’ attention, however, is how broadly the strategy defines the threat. Violence is only part of the concern. The rest seems focused on speech, ideology, and the online flow of information.
AFL sounded the alarm in an April 2 letter, accusing the administration of turning federal power inward. The group warned that officials were labeling “disfavored views” as “misinformation,” “disinformation,” or “hate speech” and then moving to suppress them under the banner of national security. The letter called it an attempt to “weaponize” the government against its own citizens.
We obtained a copy of the letter for you here.
Tulsi Gabbard responded on April 5, thanking AFL “for your work” and promising action. “We are already on this,” she said, “and look forward to declassifying this and other instances of the government being weaponized against Americans.” She pledged to restore “transparency and accountability” across the intelligence community.
The plan has four stated goals. The administration aimed to improve intelligence collection, disrupt radicalization, deter attacks, and address long-term factors contributing to domestic extremism. The language is clean, familiar, and built for press releases. The deeper issue is how those goals translate into action.
The plan promotes aggressive collaboration with private-sector partners, especially tech companies. These firms are encouraged to work closely with federal agencies, sharing data and identifying online threats. On paper, it sounds like public-private cooperation. In practice, it looks like the quiet institutionalization of speech surveillance.
Beyond surveillance, the document calls for sweeping educational initiatives. Federal agencies would lead digital and civic literacy campaigns, tailored not just to adults but to children as well. These programs aim to train Americans to spot "disinformation" and to consume government-approved content with the right mindset.
The core problem is the looseness of the definitions. “Misinformation” is not a stable category. It shifts with political context, media cycles, and official narratives. When the government starts guiding information flow based on ideological judgments, the line between counterterrorism and censorship begins to vanish.
AFL’s warning points to a larger issue. National security tools were designed to target violence, foreign threats, and coordinated plots. If those tools are redirected toward political speech or cultural dissent, constitutional protections come under direct pressure.
The plan does not call for explicit censorship, but it outlines a framework where suppression can operate quietly. Social platforms adjust algorithms, flag certain content, or apply filters, all while citing guidance from federal agencies. The result is a system of influence without direct orders, and coercion without fingerprints.
There is a real danger in normalizing these tactics. Speech becomes suspect. Criticism becomes radical. Dissent is confused with extremism. The government doesn’t need to shut anyone down when it can simply shape the environment in which opinions appear, thrive, or disappear.
What this document shows is not an overt crackdown. It reveals a softer, more technical form of control, driven by partnerships, filtered through educational campaigns, and masked as public safety. The language is bureaucratic. The effect is cultural.
One especially controversial component involved tying the United States into international speech governance schemes, including the Christchurch Call. Born from the horror of a mass shooting in New Zealand, the initiative originally focused on tackling online extremism. Since then, it has morphed into a global bureaucratic vehicle for content control. The previous Trump administration declined to join it, citing the First Amendment. Biden's team, however, embraced it, assigning the National Security Council and the State Department to get in the game.
The pitch was familiar. Join hands with the world, build resilient democracies, and pay lip service to respecting freedom of expression, just as long as that expression doesn’t offend the wrong algorithm. The declassified plan made no secret of its enthusiasm for these frameworks, even as other countries used them to pass actual censorship laws. The message was subtle: the best way to protect speech is to manage it.
Then came the Global Internet Forum to Counter Terrorism, GIFCT for short. It sounds like something out of a Tom Clancy novel, but it’s very real and very powerful. A cross-border partnership of Big Tech and government outfits, GIFCT operates a shared hash database used to flag and delete content before human eyes ever see it. It’s fast, opaque, and thoroughly insulated from public oversight.
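To make the mechanism concrete: a shared hash database works by distributing digital fingerprints of flagged files rather than the files themselves, so any participating platform can match and remove content automatically. The sketch below is a minimal illustration of that principle only, not GIFCT's actual system; real deployments use perceptual hashes (such as PDQ) so that slightly altered copies still match, and the database name and contents here are hypothetical.

```python
import hashlib

# Hypothetical shared blocklist: SHA-256 digests of previously flagged
# files, distributed among platforms instead of the files themselves.
# (Illustrative only -- real systems use perceptual hashing so that
# edited or re-encoded copies of a file still produce a match.)
SHARED_HASH_DB = {
    hashlib.sha256(b"previously flagged payload").hexdigest(),
}

def is_flagged(content: bytes) -> bool:
    """Return True if this content's fingerprint is in the shared database."""
    return hashlib.sha256(content).hexdigest() in SHARED_HASH_DB

# An upload is checked against the database before any human review;
# a match triggers automatic removal.
```

The point of the sketch is the asymmetry the article describes: a match is decided entirely by an opaque list of fingerprints, so whoever controls what goes into the database controls what every participating platform deletes.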
Researchers and independent journalists have tried to access this database. They were denied. The same goes for civil liberties groups. Yet the database has reportedly swept up not just terrorist propaganda but satire, news reports, and dissenting opinions that diverge from mainstream policy positions. You don’t have to support violence to get swept into the net. You just have to be inconvenient.
The Biden strategy praised GIFCT and called for more of the same. More partnerships, more coordination, more invisible lines between state security and corporate enforcement. Intelligence sharing would increase both domestically and abroad, with federal agencies told to tighten cross-border surveillance and gather foreign intel that might “connect” to domestic threats. That’s a generous term, “connect.” It can mean almost anything with the right briefing memo.
Federal law enforcement would also be looped into new financial intelligence pipelines, a polite way of saying banks might one day help track the digital footprint of your unpopular opinion. The plan took care to frame all of this as “preventative,” a word that functions as a bureaucratic perfume for surveillance.
What really stretched the logic was how far this strategy drifted from any obvious definition of terrorism. Sections were devoted to civic engagement, voter turnout, and even pandemic responses. These were presented as social resilience measures. The theory was that if people vote more, wash their hands, and feel included, they'll be less likely to fall into extremism. Whether there's evidence for that, the document didn't say. But it did make clear that nearly any policy goal could now be rebranded as counterterrorism.
The real sleight of hand comes with the plan’s fixation on “disinformation.” Over and over, the term is deployed without being defined. Page 5 hands the FBI, CIA, DHS, and the State Department marching orders to investigate how foreign disinformation might influence American minds. What it doesn’t clarify is where the boundary lies between propaganda and political argument, between foreign influence and domestic skepticism. That line, too, can be moved at will.
By page 7, the program graduates from intelligence analysis to educational programming. A national rollout of digital literacy campaigns is spelled out in detail, assigning DHS, the Department of Education, USAID, and others to push federally approved messaging at the local level. The idea is to help people “navigate” online spaces. In practice, it looks a lot like federal agencies getting a direct lane into classrooms, community groups, and civic organizations to shape which narratives are trustworthy.
These campaigns are billed as tools of empowerment. What they create, though, is a top-down apparatus for regulating online conversation by shaping the boundaries of acceptability.
When language this broad gets institutionalized, it becomes part of the infrastructure. Programs rarely get rolled back. Mission creep becomes permanent. And when something as malleable as “disinformation” becomes a national security threat, the tools to fight it won’t stop at the fringe. They’ll move inward. Quietly. Bureaucratically. Irrevocably.
By page 9, the language sharpens. The strategy calls for an active partnership between federal agencies and online platforms, encouraging routine sharing of flagged content. The mission is to detect and neutralize “terrorist content,” a term so pliable that even basic political commentary has ended up on the wrong side of the filter.
The FBI, DHS, and the National Counterterrorism Center were chosen to take point. Their job isn't just enforcement. It's cultivation: nurturing closer ties with Silicon Valley in what amounts to a permanent public-private intelligence network.
The strategy also lays out an international expansion plan. Federal agencies are told to deepen participation in “global, multilateral fora.” That’s polite language for joint operations with foreign governments and tech firms, where content policies are hashed out away from the meddling eyes of the public.
The plan calls for a full-spectrum expansion of intelligence sharing across government layers. Page 3 instructs the FBI to enhance coordination not only within federal ranks but also with state and local law enforcement. This expanded information flow creates a vertical system where federal narratives can be replicated across jurisdictions, reinforced by shared intelligence and standardized risk profiles.
Page 4 takes that framework overseas. Agencies are told to improve collaboration with foreign partners, gathering external intelligence that can be connected to domestic targets. The State Department, DOJ, CIA, and FBI were told to integrate international and local threat monitoring into a unified lens. This creates a structure where loosely affiliated actors can be networked together, often through circumstantial or ideological association, rather than clear operational ties.
Page 11 outlines financial tracking as the next frontier. The plan calls for formal systems to share financial intelligence with law enforcement, making it easier to trace funding and activity patterns. These tools, when applied to terrorism, make sense. When applied to speech and political behavior, they become something else entirely.
The cumulative effect is the construction of a surveillance apparatus built not just to intercept violence, but to anticipate and respond to narratives the government finds destabilizing. The infrastructure isn’t theoretical. It’s live.
The declassified plan signals a redefinition of the enemy. Where past counterterrorism efforts focused on individuals preparing for violence, this version preempts ideas. It recasts expression as risk, and participation in certain conversations as potential radicalization.
By embedding national security functions in tech companies, international alliances, and education programs, the administration constructed a framework that treats public discourse as a battlefield and information as a threat vector. This is not about stopping the next Oklahoma City. It is about managing perception.
The document wraps all of this in the language of democratic preservation. It frames speech controls as tools for resilience. But when unelected boards, private algorithms, and international panels are setting the rules for what Americans can say and see, democracy becomes a set piece, not a system.
Open debate requires risk. It tolerates bad takes, misinformation, and sometimes even malicious speech because the alternative is worse. A system where speech is filtered through government-approved channels in the name of public safety doesn’t prevent tyranny. It makes room for it.