For two years, Hannah Byrne was part of an invisible machine that determines what over 3 billion people around the world can say on the internet. From her perch within Meta’s Counterterrorism and Dangerous Organizations team, Byrne helped craft one of the most powerful and secretive censorship policies in internet history. Her work adhered to the basic tenet of content moderation: Online speech can cause offline harm. Stop the bad speech — or bad speakers — and you have perhaps saved a life.
In college and early in her career, Byrne had dedicated herself to the field of counterterrorism and its attempt to catalog, explain, and ultimately deter non-state political violence. She was most concerned with violent right-wing extremism: neo-Nazis infiltrating Western armies, Klansmen plotting on Facebook pages, and Trumpist militiamen marching on the Capitol.
In video meetings with her remote work colleagues and in the conference rooms of Menlo Park, California, with the MAGA riot of January 6 fresh in her mind, Byrne believed she was in the right place at the right time to make a difference.
And then Russia invaded Ukraine. A country of under 40 million found itself facing a full-scale assault by one of the largest militaries in the world. Standing between Ukraine and conquest were the capable, battle-tested fighters of the Azov Battalion, a unit founded as the armed wing of a Ukrainian neo-Nazi movement. What followed shook apart not only Byrne’s plans for her own life, but also her belief in content moderation and counterterrorism.
Today, she is convinced her former employer cannot be trusted with power so vast, and that the systems she helped build should be dismantled. For the first time, Byrne shares her story with The Intercept, explaining why the public should be as disturbed by her work as she came to be.
Through a spokesperson, Meta told The Intercept that Byrne’s workplace concerns “do not match the reality” of how policy is enforced at the company.
Good Guys and Bad Guys
Byrne grew up in the small, predominantly white Boston suburb of Natick. She was 7 years old when the World Trade Center was destroyed and grew up steeped in a binary American history of good versus evil, hopeful she would always side neatly with the former.
School taught her that communism was bad, Martin Luther King Jr. ended American racism, and the United States had only ever been a force for peace. Byrne was determined after high school to work for the CIA in part because of reading about its origin story as the Nazi-fighting Office of Strategic Services during World War II. “I was a 9/11 kid with a poor education and a hero complex,” Byrne said.
And so Byrne joined the system, earning an undergraduate degree in political science at Johns Hopkins and then enrolling in a graduate research program in “terrorism and sub-state violence” at Georgetown University’s Center for Security Studies. Georgetown’s website highlights how many graduates from the Center go on to work at places like the Department of Defense, Department of State, Northrop Grumman — and Meta.
It was taken for granted that the program would groom graduates for the intelligence community, said Jacq Fulgham, who met Byrne at Georgetown. But even then, Fulgham remembers Byrne as a rare skeptic willing to question American imperialism: “Hannah always forced us to think about every topic and to think critically.”
Part of her required reading at Georgetown included “A Time to Attack: The Looming Iranian Nuclear Threat,” by former Defense Department official Matthew Kroenig. The book advocates for preemptive air war against Iran to end the country’s nuclear ambitions. Byrne was shocked that the premise of bombing a country of 90 million — presumably killing many innocent people — to achieve the ideological and political ends of the United States would be considered within the realm of educated debate and not an act of terrorism.
That’s because terrorism, her instructors insisted, was not something governments do. Part of terror’s malign character is its perpetration by “non-state actors”: thugs, radicals, militants, criminals, and assassins. Not presidents or generals. Unprovoked air war against Iran was within the realm of polite discussion, but there was never “the same sort of critical thinking to what forms of violence might be appropriate for Hamas” or other non-state groups, she recalled.
As part of her program at Georgetown, Byrne studied abroad in places where “non-state violence” was not a textbook topic but real life. Interviews with former IRA militants in Belfast, ex-FARC soldiers in Colombia, and Palestinians living under Israeli occupation complicated the terrorism binary. Rather than cartoon villains, Byrne met people who felt pushed to violence by the overwhelming reach and power of the United States and its allies. Wherever she went, Byrne said, she met people victimized, not protected, by her country. This was a history she had never been taught.
Despite feeling dismayed about the national security sector, Byrne still harbored a temptation to fix it from within. After receiving her master’s, she entered a State Department-sponsored immersion language program in India, still hopeful for an eventual job at the CIA or National Security Agency, and then took a position at the RAND Corporation as a defense analyst. “I hoped I’d be able to continue to learn and write about ‘terrorism,’ which I now knew to be ‘resistance movements,’ in an academic way,” Byrne said. Instead, her two years at RAND were focused on the traditional research the think tank is known for, contributing to titles like “Countering Violent Nonstate Actor Financing: Revenue Sources, Financing Strategies, and Tools of Disruption.”
“She was all in on a career in national security,” recalled a former RAND co-worker who spoke to The Intercept on the condition of anonymity. “She was earnest in the way a lot of inside-the-Beltway recent grads can be,” they added. “She still had a healthy amount of sarcasm. But I think over time that turned into cynicism.”
Unfulfilled at RAND, Byrne found what she thought could be a way to both do good and honor her burgeoning anti-imperial politics: fighting the enemy at home.
She decided to focus on the threat of white supremacists and took a job that let her target them directly: at Facebook, which had just watched a mob steeped in white supremacist rhetoric storm the U.S. Capitol. She believed Facebook’s policy team would let her combat this real danger in a way the federal national security establishment would not. She soon realized she had instead joined a microcosm of the national security state.
Joining Meta in September 2021, she helped write the rulebook that governs how users may discuss the world’s most dangerous individuals and organizations, from the Ku Klux Klan to international terrorist groups. Meta not only bans these entities from its platforms, but also prohibits users from glorifying, supporting, or representing them. Byrne’s job was to keep dangerous organizations off Meta’s properties and to stop their messages from spreading online and into the real world. But the ambiguity of terms like “glorifying” and “supporting” made the Dangerous Organizations and Individuals (DOI) policy a persistent source of controversy and over-enforcement.
She discovered that the secret list of banned entities, obtained by The Intercept in 2021, consisted primarily of Muslim, Arab, and South Asian groups, hewing closely to U.S. foreign policy interests. A third-party audit in 2022 found that Meta had violated the human rights of Palestinian users through over-enforcement of the DOI policy. The company’s own Oversight Board repeatedly overturned DOI content removals and urged Meta to disclose more about the list and how it is used.
Though she had come to fight violent white supremacists, Byrne realized that Meta’s censorship systems were heavily shaped by government policy: The company’s content moderation rules mirrored State Department decisions, tying the two intimately together. She also came to doubt Meta’s ability to maintain an accurate terror roster, since entries faced no ongoing checks or evidentiary review. Just how flexible the blacklist could be became clear when Azov, the previously banned white supremacist unit, was removed from the list as its fighters were recast as freedom fighters against the Russian invasion.
As the Counterterrorism and Dangerous Organizations policy manager, Byrne watched the perception of Azov shift from neo-Nazi militia to celebrated freedom fighters, until Meta’s rules changed to permit praise of the unit. The abrupt reversal alarmed Byrne and some of her colleagues, who questioned whether any genuine transformation had taken place. Byrne said she gathered photographic evidence contradicting Meta’s claims; the company’s response, she recalled, was that whatever Nazi sympathies Azov may have shown in recent years, rule-violating posts had declined enough to justify the change.
The process felt rigged. She and her colleagues were permitted to argue that the Azov Battalion should stay blacklisted, but the evidence was supposed to come from Facebook, a platform Azov fighters were banned from using in the first place. Byrne noted that Facebook had long struggled to keep neo-Nazi content off its platform at all, which made assembling solid evidence online all the more difficult. She came to suspect that the delisting was politically motivated, a move to support the U.S.-backed war effort.
Byrne worried the decision would fuel white supremacist violence and help spread the unit’s propaganda, and she was shocked by how quickly the group was exempted and praise of it allowed once more. Meta defended its decision, saying the Azov unit had reformed and no longer met the company’s standards for designation.
Byrne recalled a similar frustration with Meta’s blacklisting of factions fighting Bashar al-Assad’s government in Syria, but never the government itself. Meta confirmed that its definition of terrorism does not apply to nation-states.
Byrne had started the job believing right-wing extremism was a top priority for the company, yet she struggled to get resources to address neo-Nazi content. The Azov exemption, by contrast, happened fast. Meta disputed Byrne’s objections to its Dangerous Organizations policy, saying its rules are comprehensive and designed to stop violence and hate on its platforms.
Still, the Azov reversal was not what ended Byrne’s counterterrorism career. After the Capitol attack, Meta confronted a murkier problem: how to profile the kind of person likely to take part in such violence when there is no membership list to consult. “It’s an ideology that resides in your mind,” Byrne said.
What if the company could stop people from ever being recruited into extremist groups like the Proud Boys, the Three Percenters, or ISIS? That task fell to Byrne, who worked on Meta’s Malicious Actor Framework, an effort to identify individuals prone to dangerous behavior. The framework aimed to head off real-world violence by detecting such people and preventing them from organizing and connecting with like-minded users on Meta’s platforms.
But identifying potential sympathizers of terrorism raised a basic question: What does it actually mean to be a sympathizer? Meta’s approach echoed the Obama-era framework of Countering Violent Extremism, or CVE, which critics have long faulted as pseudoscientific, ineffective, and discriminatory.
Byrne’s concerns deepened as Meta shifted toward profile-based detection to predict dangerous behavior among its users. The company aimed to flag risky interactions between dangerous individuals and potential victims vulnerable to radicalization, all without reading their messages. Meta confirmed the existence of the Malicious Actor Framework, but Byrne feared that any predictive system built on such thin evidence would encode bias and sweep up innocent users. When she raised these concerns in meetings, she said, they were not taken seriously, leaving her worried about the impact on the very users the system was meant to protect.
As Meta’s internal teams combed Facebook profiles for indicators of risk, Byrne faulted the methodology, and in particular the background of the investigators applying it. Many had come from government agencies known for biased approaches to counterterrorism, and now they were helping decide what users could say online, no matter where in the world those users lived.
Throughout her interviews, Byrne emphasized that her criticism was directed at Meta’s systems, not at individual colleagues. Her fear was that a framework rooted in inference would ultimately endanger the very users she was hired to protect. The biases, she said, were structural: Arabic, for example, was poorly represented in the underlying data set.
She also worried about the tendency to generalize from one form of violent extremism and apply it to vastly different cultures, contexts, and ideologies; equating groups like Hamas and the KKK, she found, had no basis in history or research. Terms like “misinformation” and “disinformation” troubled her too, labels far too blunt to capture the complexity of the issues they were meant to describe.
As her time at Meta wore on, Byrne grew disillusioned. Her efforts were not making Facebook safer, she realized, and she was not effectively combating extremism. She was just another employee in a mundane tech job.
Her mental health deteriorated, leading to a leave of absence and a partial hospitalization. During that time, Byrne grappled with the ethical implications of her work and ultimately decided she could not continue in the role. She left Meta shortly before the war in Gaza would thrust her former employer’s censorship practices into fresh controversy.
Byrne’s experience at Meta led her to question the concept of content moderation itself. She still recognized the need to keep violent groups from using platforms like Facebook to organize, but she believed there were better ways for Meta to do it, starting with more human moderators and policies tailored to the diverse global user base of Facebook and Instagram.
Ultimately, Byrne and her colleagues felt less like proactive agents preventing harm in the world than like a rapid-response team for negative press, a cleanup crew forever reacting to the latest round of media scrutiny.
Byrne remembers being glued to her computer while her boss’s boss, or even Mark Zuckerberg, searched the platform, screenshotted posts, and sent them to the team, demanding to know why certain content was still up. Perversely, her team came to be grateful for media leaks: Bad press meant resources and the license to make necessary changes.
Byrne acknowledges the real threat posed when militant neo-Nazi organizations and other violent groups organize on platforms like Facebook. But she points to how companies like Meta restrict pro-Palestinian speech while letting glorifications of Israeli state violence go unchecked.
Since leaving Meta, Byrne has become active in pro-Palestinian protest circles, criticizing her former employer’s role in suppressing speech about the ongoing Israeli bombardment of Gaza. She has given presentations on Meta’s censorship practices and offered advice on how to avoid being banned from Instagram.
She questions the establishment’s double standard of labeling non-state actors terrorists while condoning the same behavior from governments, a standard under which white, state-perpetrated violence is tolerated and brown and Black non-state violence is branded terror.
In contrast to previous Big Tech dissidents like Frances Haugen, Byrne believes that her former employer cannot be reformed through algorithm tweaks or increased transparency. She fundamentally opposes an American company policing speech worldwide, even in the name of safety.
As long as U.S. foreign policy and federal law dictate which acts of violence are acceptable based on politics rather than harm, and as long as Meta complies with those laws, Byrne believes the system cannot be fixed. And she remains disturbed that veterans of the military, the State Department, and the CIA are the ones policing what the world can say.