Escaping the New Era of Surveillance

April 30, 2026

Yevgeny Zamyatin’s 1924 novel We unfolds a tale of a future society governed by an exacting mathematical order. Train schedules are treated as sacred, and mathematics stands as humanity’s highest calling. Citizens are given numbers instead of names and are paired by the state with predetermined lovers, a system designed to keep desire in line with the needs of the collective.

They also spend their lives inside buildings made entirely of glass, a design chosen so that government officials can monitor every movement and ensure that everyone remains properly aligned with the smooth functioning of a mechanized society.

D-503, the book’s central figure, encounters a rebellious young woman named I-330, who reveals to him that beyond the Green Wall marking the outer limits of the One State lives a population of free humans, unbound by the state’s rules and at home in the world of nature. They are spontaneous, defiant, and human in all the ways the state seeks to suppress. Eventually the government discovers D-503’s deviations from the prescribed path: his entanglement with I-330 and her rebels, and the child he has illicitly fathered with his assigned partner. He is subjected to the Great Operation, which excises his imagination and leaves him docile and compliant.

We was banned in the Soviet Union until the end of the 1980s, just before the Soviet system collapsed. The Soviet project itself depended on a degree of surveillance that relied not on glass walls but on bugged homes, wiretaps, and a network of informants. History has seen many tyrannies, yet totalitarianism—an all-encompassing domination that reaches into every corner of life—is achievable only when there exists technology capable of watching, listening, and remembering. Today, new methods of surveillance are at work, methods that surpass any telephone bug or prying neighbor. And the shadow of surveillance that falls over us today extends beyond government, reaching into private industry as well.

We know, of course, and have known for some time, that our online browsing habits are tracked by advertisers to push goods more effectively. But a new development has arrived, one with potentially even more destructive power: the large language model. This tool grants corporations and governments an unprecedented level of oversight and influence over ordinary people’s lives.

Its power and its manner of operation are parasitic. The information you put on the internet is what births the LLM, sustains it, and enlarges its power. It feeds off every chatbot prompt, every post on social media, almost every online action. The more it consumes, the stronger it becomes. The stronger it becomes, the more it can be wielded as a device to surveil and to manipulate you.

So what should be done in this uncomfortable situation? Thoughtful laws and regulations will help, but for now you can choose to defy the trend. Right this moment you still have the chance to step beyond the Green Wall, to step out of your glass house, and to breathe the fresh, unbridled air of freedom.

Online privacy has been a concern since the invention of the internet. In the 1990s, parents and educators warned that children should not post too much online, especially anything personal. “Anyone can look at that and it stays on there forever,” I recall hearing. Over time, technology companies grew adept at overcoming this caution. By appealing to our social needs for connection, status, distraction, novelty, fear, and outrage, they coaxed us into spending countless hours on their sites and apps, accumulating a vast store of advertising data.

What is new today is the extraordinary capacity of industries and governments to analyze and sort that data. An LLM can now rapidly aggregate information about you from across a multitude of platforms and media: your keystrokes and clicks, your high school photos, your Google searches, your purchases, your friendships, your emails, and much more. From all of this, the LLM can construct a comprehensive predictive portrait of you. The government is already employing these tools to track illegal immigrants and to target perceived enemies. Who knows what is being done behind closed doors by private companies?

But the matter runs deeper. Each time you engage with a chatbot, what you say or type is absorbed by the LLM. It feeds on your manner of expression, your questions, your responses, and it gradually molds itself into a more precise simulacrum of you. As the LLM hones a more accurate linguistic fingerprint of you, it becomes more skilled at saying what you want to hear, at drawing you in, and at making you dependent on interacting with it. The resulting portrait of you is accessible to the company that owns the chatbot; it becomes easy to peer into your habits, ideas, interests, and the way you express yourself.

This gives them not merely advertising leverage but the ability to prompt you. The legal scholar Cass Sunstein has already discussed the possibility of using chatbots to “nudge” people toward more “desirable” behaviors. He condemns manipulation, yet the line between a “behavioral nudge” and “manipulation” is a fine one. As the LLM constructs a more exact predictive model of you, calculating what appeals to you and what does not, it becomes more adept at guiding your choices and shaping your thinking. It is clear that whoever holds the records of all our chatbot conversations, or who uses an LLM to monitor and forecast our actions, commands tremendous power.

Why does the notion of the glass world of We, or the surveillance state in a novel like 1984, unsettle us so? Because there are aspects of our lives that are so intimate they must be safeguarded from the gaze of a crowd: the words of love shared with a partner, the words of correction or affection offered to a child, even those moments spent alone, unobserved. These are sacred to us. For unknown eyes to intrude upon them would be to desecrate them, to violate the hallowed boundaries that mark the dignity of a life that is uniquely ours.

Yet the LLM—and the people who own it—can see even more clearly than windows or eyes could permit, because we feed into it our hidden thoughts and insecurities, our shameful habits, our knowledge and questions.

It would be a mistake to consider the LLM as merely spying on us. LLMs cannot spy because they are not alive. They are algorithms, running on electrical hardware, capable of tracking and sorting with astonishing precision those signals and symbols that we input. The symbols, however, may be decoded by people on the other side of the LLM, those who possess the keys to access them.

The upshot is this: now, more than ever, our lives are open to surveillance and manipulation on a grand scale. Yet we retain some agency. The more we opt out, the less power the LLM—and its proprietors—has over us.

It used to be said that necessity is the mother of invention. In the age of LLMs, we might instead say that invention often begets necessity. The LLM now performs tasks we do not necessarily need it to perform. It summarizes and analyzes texts that we could have engaged with more deeply on our own. It answers questions that books, conversations, lectures, and articles could have answered more thoroughly. It condenses what does not need to be shortened, composes what does not need to be composed, and makes us increasingly dependent in areas where we once had competence.

In some cases, the need for an LLM arises only after we have formed a habit of using it. When we surrender reading, thinking, and writing to the LLM, our own capacities shrink, and we become dependent on the machine to guide us. This is where the danger of manipulation lies. The more we rely on the LLM, the worse we become at analyzing, thinking, and absorbing information, and the more we rely on the answers that the chatbot provides. Those answers can be shaped by whoever controls the system.

We can distinguish between using an LLM to replace substantive writing, thinking, analysis, composition, decision-making, and judgment, versus using it for rote or highly technical tasks. For sorting vast quantities of data and identifying patterns, machine learning algorithms can be invaluable. For automating repetitive tasks, they may also have a place. When deciding whether employing LLMs is appropriate, one should ask: Is this diminishing my own capabilities? Is it making me less free? Is it alienating me from reality?

As a general guideline, I would suggest not employing an LLM for anything we could reasonably accomplish with our own abilities using other tools. Writing, reading, analyzing, and communicating fall well within our skill set. So too do matters of design and artistic creation, and moral decision-making above all remains squarely in the human realm. Because LLMs use us as much as we use them, and because the process can erode our capacities, we ought to seek other means to achieve our goals whenever possible.

This is especially true for tasks that could be achieved through collaboration and conversation with others. If you have a question about your career, how much more rewarding and enriching would it be to find a mentor who can guide you? If you wonder about a difficult academic topic, turn to books and articles; the act of engaging with the material will yield more than any LLM could deliver, including the time and space for your own reflection, digestion, and the growth that comes from wrestling with ideas on your own. If you remain stuck, consider emailing someone, calling someone, or meeting for coffee. You might even make a friend. And how much better would that be than sitting in the dark, alone with a screen?

And when it comes to communication, there is something immeasurably more rewarding about sitting across a table at a deli than engaging in online chatter. If you want to hear opinions and thoughts that are merely statistical averages of online content, prompt an LLM. If you crave something richer and more mysterious, speak with another human.

Think of the joy of receiving a letter written by hand, something private that no one else, unless intended, has glimpsed. Everything you write online is stored on servers owned by the platforms you use. By contrast, no one else owns your letters, and no one can rummage through them to discover your purchases, your private opinions, or what you say to your loved ones. Opening someone else’s mail is illegal.

LLMs will produce outputs that sound, to us, disturbingly human. But they are not truly human, and the underlying data have no intrinsic meaning of their own. They are patterns of charge in transistors; we have agreed, by convention, to treat those charges as symbols that carry meaning within a shared language. This is why I have previously argued that technologists who claim to be conversing with a person when they use an LLM may be operating under a delusion.

In the same discussion, I argued that LLMs should not be presented to the public as if they truly think, feel, and judge. There are steps tech companies could take to make it clearer to users that these tools are not persons at all—such as giving the programs non-human names and avoiding language that implies life, thought, or emotion. Anthropic, which has styled its algorithm with a human name and spoken at length about its feelings, wishes, and hopes, has presented its LLM tools in personal terms that are both unnecessary and frankly dishonest.

We are unlikely to see major changes from tech companies in the near term; their revenue relies on our continued engagement. In the meantime, we still retain the option to opt out. We can read books, think in quiet, observe the natural world, talk with friends, write letters, wrestle with problems, and learn from failure. If an LLM genuinely helps you accomplish something worthwhile—something you should be doing, something you could not do before, and not at a cost greater than what it gives—you may use it. But for everything else, there remains a vast, expansive world outside the screen, and you are still permitted to live in it.

Pilar Marrero

Pilar Marrero approaches political reporting with a strong interest in power, institutions, and the decisions that shape public life. Her coverage focuses on U.S. and international politics, offering clear, readable analysis of the events that influence the global conversation, with particular attention to the links between local developments and worldwide political shifts.