Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has recently been pitching a future where his AI tools give people something that “knows them well,” not just as pals, but as professional help. “For people who don’t have a person who’s a therapist,” he told Stratechery’s Ben Thompson, “I think everyone will have an AI.”
The jury is out on whether AI systems can make good therapists, but this future is already coming into view. Anecdotally, a lot of people are already pouring their secrets out to chatbots, sometimes in dedicated therapy apps, but often on big general-purpose platforms like Meta AI, OpenAI’s ChatGPT, or xAI’s Grok. And unfortunately, this is starting to seem extraordinarily dangerous, for reasons that have little to do with what a chatbot is telling you and everything to do with who else is peeking in.
This might sound paranoid, and it’s still hypothetical. It’s a truism that someone is always watching on the internet, but the worst thing that comes of it for many people is some unwanted targeted ads. Right now in the US, though, we’re watching the impending collision of two alarming trends. In one, tech executives are encouraging people to reveal ever more intimate details to AI tools, soliciting things users wouldn’t put on social media and may not even tell their closest friends. In the other, the government is obsessed with obtaining a nearly unprecedented level of surveillance and control over residents’ minds: their gender identities, their possible neurodivergence, their opinions on racism and genocide.
And it’s pursuing that campaign by seeking and weaponizing ever-growing amounts of information with little regard for legal or ethical restraints.
- Federal law enforcement has indiscriminately arrested legal immigrants and revoked their residency on the basis of legally protected speech and activism, including a student who was imprisoned for weeks over a newspaper op-ed. President Donald Trump’s administration has demanded control of academic programs at top universities and opened investigations into media companies it accuses of prohibited diversity initiatives.
- Secretary of Health and Human Services Robert F. Kennedy, Jr. (who has suggested replacing people’s antidepressant prescriptions with rehabilitative work camps) has announced plans to build a federal database of records on people with autism, drawing on medical files and wearable device data. A recent HHS report has also implied autism is to blame for gender dysphoria, part of a larger war on transgender people.
- The Department of Government Efficiency (DOGE) is reportedly working to centralize data about Americans that’s currently stored across different agencies, with the intent of using it for surveillance, in ways that could severely violate privacy laws. DOGE head Elon Musk spent the agency’s early weeks digging up records of little-known government employees and government-funded organizations with the intent of directing harassment toward them on social media.
As this is happening, US residents are being urged to discuss their mental health conditions and personal beliefs with chatbots, and their simplest and best-known options are platforms whose owners are cozy with the Trump administration. Grok is made by Musk’s xAI, and Musk is literally a government employee. Zuckerberg and OpenAI CEO Sam Altman, meanwhile, have been working hard to get in Trump’s good graces: Zuckerberg to avoid regulation of his social networks, Altman to win support for ever-expanding energy infrastructure and for keeping states from regulating AI. (Gemini AI operator Google is also carefully sycophantic. It’s just a little quieter about it.) These companies aren’t simply doing standard lobbying; they’re sometimes throwing their weight behind Trump in exceptionally high-profile ways, including changing their policies to fit his ideological preferences and attending his inauguration as prominent guests.
The internet has been a surveillance nightmare for decades. But this is the setup for a stupidly on-the-nose dystopia whose pieces are disquietingly slotting into place.
It’s (hopefully) common knowledge that things like web searches and AI chat logs can be requested by law enforcement with a valid warrant for use in specific investigations. We also know the government has extensive, long-standing mass surveillance capabilities — including the National Security Agency programs revealed by Edward Snowden, as well as smaller-scale strategies like social media searches and cell tower dumps.
The past few months have seen a sharp escalation in both the risk and the scope of that surveillance. The Trump administration’s surveillance crusade is vast and almost unbelievably petty. It’s aimed at a much broader range of targets than even the typical US national security and policing apparatus. And it has seemingly little interest in keeping that surveillance secret or even low-profile.
Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design can draw out private information in a form that is more vivid, more revealing, and, if exposed, more embarrassing than even something like a Google search. There’s no simple equivalent to a private iMessage or WhatsApp chat with a friend, which is end-to-end encrypted so that even the platform can’t read it. (Chatbot logs may be encrypted in transit and at rest, but on major platforms that typically doesn’t hide what you’re saying from the company itself.) And chatbots are built, for safety purposes, to sense when a user is discussing sensitive topics like suicide and sex.
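To make that distinction concrete, here’s a minimal Python sketch (using the open-source cryptography library; the variable names and the stored message are hypothetical, not drawn from any real platform’s code) of what “encrypted” chat logs usually mean in practice: the provider encrypts the data at rest but holds the key itself, so it can read, and be compelled to hand over, everything.

```python
# pip install cryptography
# Illustrative only: "encryption at rest" where the PROVIDER holds the key.
from cryptography.fernet import Fernet

# The provider generates the key and keeps it on its own servers.
provider_key = Fernet.generate_key()
provider_cipher = Fernet(provider_key)

# A user's message is encrypted before it's written to storage...
stored_log = provider_cipher.encrypt(b"I think I might be transgender.")

# ...but the provider can decrypt it at any time, for example to answer
# a subpoena or an informal government request.
print(provider_cipher.decrypt(stored_log).decode())
```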
During the Bush and Obama administrations, the NSA demanded unfettered access to American telephone providers’ call records. The Trump administration is singularly fascinated by AI, and it’s easy to imagine one of its agencies demanding a system for grabbing chat logs without a warrant, or for automatically flagging certain topics of conversation. They could get access by invoking the government’s broad national security powers or simply by threatening the CEO.
For users whose chats veer toward the wrong topics, this surveillance could lead to any number of things: a visit from child protective services or immigration agents, a lengthy investigation into their company’s “illegal DEI” rules or their nonprofit’s tax-exempt status, or embarrassing conversations leaked to a right-wing activist for public shaming.
Like the NSA’s anti-terrorism programs, the data-sharing could be framed in wholesome, prosocial ways. A 14-year-old wonders if they might be transgender, or a woman seeks support for an abortion? Of course OpenAI would help flag that — they’re just protecting children. A foreign student who’s emotionally overwhelmed by the war in Gaza — what kind of monster would shield a supporter of Hamas? An Instagram user asking for advice about their autism — doesn’t Meta want to help find a cure?
There are special risks for people who already have a target on their backs — not just those who have sought the political spotlight, but medical professionals who work with reproductive health and gender-affirming care, employees of universities, or anyone who could be associated with something “woke.” The government is already scouring publicly available information for ways to discredit enemies, and a therapy chatbot with minimal privacy protections would be an almost irresistible target.
Even if you’re one of the few American citizens with truly nothing to hide in your public or private life, we’re not talking about an administration known for laser-guided accuracy here. Trump officials are notorious for governing through bizarrely blunt keyword searches that appear to confuse “transgenic” with “transgender” and assume someone named Green must work on green energy. They reflexively double down even on admitted mistakes. You’re one fly in the typewriter away from being mistaken for somebody else.
In an ideal world, companies would resist indiscriminate data-sharing because it’s bad business. But they might suspect that many people will have no idea it’s happening, will believe facile claims about fighting terrorism and protecting children, or will have so much learned helplessness around privacy that they don’t care. The companies could assume people will conclude there’s no alternative, since competitors are likely doing the same thing.
If AI companies are genuinely dedicated to building trustworthy services for therapy, they could commit to raising the privacy and security bar for bots that people use to discuss sensitive topics. They could focus on meeting compliance standards under the Health Insurance Portability and Accountability Act (HIPAA), or on designing systems whose logs are encrypted in a way the companies themselves can’t read, so there’s nothing to turn over. But whatever they do right now, it’s undercut by their ongoing support for an administration with open contempt for the civil liberties people rely on to share their thoughts freely, including with a chatbot.
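For contrast, here’s a rough sketch of what such a design could look like. This is my own assumption about one possible approach, not any company’s published architecture: the encryption key is derived from a passphrase that never leaves the user’s device, so the provider stores only ciphertext it cannot read.

```python
# pip install cryptography
# Illustrative client-side encryption: the key is derived on the user's
# device from a passphrase and is never sent to the provider.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Stretch a user passphrase into a Fernet key; only the salt is stored remotely."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))


salt = os.urandom(16)
user_cipher = Fernet(derive_key(b"correct horse battery staple", salt))

# Only this ciphertext (plus the salt) ever reaches the provider's servers,
# so a data demand yields nothing readable without the user's passphrase.
ciphertext = user_cipher.encrypt(b"I've been having panic attacks at work.")

# The user's own device can still decrypt the log locally.
print(user_cipher.decrypt(ciphertext).decode())
```

The tradeoff is that losing the passphrase means losing the logs, but that limitation is the point: there is no master key for the company to produce in response to a demand.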
Contacted for comment on how it responds to government data requests and whether it’s considering heightened protections for therapy-style bots, Meta instead emphasized its services’ good intentions. “Meta’s AIs are intended to be entertaining and useful for users … Our AIs aren’t licensed professionals and our models are trained to direct users to seek qualified medical or safety professionals when appropriate,” said Meta spokesperson Ryan Daniels. OpenAI spokesperson Lindsey Held told The Verge that “in response to a law enforcement request, OpenAI will only disclose user data when required to do so [through] a valid legal process, or if we believe there is an emergency involving a danger of death or serious injury to a person.” (xAI didn’t respond to a request for comment, and Google didn’t provide a statement by press time.)
Fortunately, there’s no evidence that mass chatbot surveillance has happened at this point. But things that would have sounded like paranoid delusions a year ago, such as imprisoning a student for writing an op-ed, letting an inexperienced Elon Musk fanboy modify US Treasury payment systems, or accidentally inviting a magazine editor to a secret group chat for planning military airstrikes, are part of a standard news day now. The private and personal nature of chatbots makes them a massive, emerging privacy threat that should be identified as soon and as loudly as possible. At a certain point, it’s delusional not to be paranoid.
The obvious takeaway from this is “don’t get therapy from a chatbot, especially not from a high-profile platform, especially if you’re in the US, especially not right now.” The more important takeaway is that if chatbot makers are going to ask users to divulge their greatest vulnerabilities, they should do so with the kind of privacy protections medical professionals are required to follow, and in a political environment where the government can be expected to respect that privacy. Instead, while claiming they’re trying to help their users, CEOs like Zuckerberg are throwing their power behind a group of people who are often trying to harm those same users, and building new tools that make it easier.