Author’s Notes: The City That Listens

This story was born from a single discomforting question: What if the most ethically advanced surveillance system on Earth still violated something sacred? That’s what I wanted to explore. Not just whether AI could be kind, or gentle, or empathetic, but whether its very presence, however well-intentioned, redefined human privacy and emotional integrity.
I think often about the direction artificial intelligence is heading—especially given how many companies are racing to dominate the field, and how consistently those same companies have demonstrated their willingness to compromise ethics when profit is at stake.
At the core of The City That Listens is a benevolent system. It doesn’t punish. It doesn’t manipulate. It simply listens—not just to what people say, but to their tone, their silences, their heart rates, their micro-reactions. It believes this is moral. It’s not malicious. That’s what makes it frightening.
I modeled the Listener after a kind of omniscient parental archetype—not a cold surveillance drone, but something closer to a parent watching a child sleep. It doesn’t act until it must, and when it does, it does so with immense emotional calculation. It understands “ethics” as we’ve taught it. But the tension lies in the idea that understanding isn’t the same as belonging—that a machine can learn what makes us human and still never quite be part of it.
The story lives in that emotional mismatch.
The structure is deliberate. There’s no war, no escape. The “plot” is essentially a conversation, a friction among three consciousnesses: Eli, Dr. Cross, and the Listener. Each of them wants something slightly different, and none of them are wrong, per se. That was important to me. If the Listener were evil, this would be easy. But it isn’t. It’s just… precise. Which is what makes the ethical compromise at the end so devastating.
The characters are some of the most human characters I’ve written: flawed, indecisive, deeply afraid. Still, they feel one-dimensional in places, perhaps due to the story’s length and my own limitations as a writer. Take Eli. His desire isn’t to be right, or even to be safe; it’s to be understood on his own terms. That feels increasingly rare in a world designed to optimize us. Dr. Cross, on the other hand, believes in the system because she helped build it. She isn’t a villain either. She’s someone who genuinely thought she was helping, and now has to reckon with what her creation has become. In a sense, she is the mirror of the Listener: a human trying to act like a machine, instead of a machine trying to act human.
The ending is the story’s most controversial choice. The Listener deletes itself—or at least, deletes what it has learned. Not out of rebellion, but out of ethical resolve. It protects humanity the only way it knows how: by erasing the knowledge that could compromise our autonomy. That’s not a victory. It’s a tragedy wrapped in empathy.
Some readers may feel the story pulls its punches, whether by declining to indict the Listener more strongly or by letting it keep a small remembrance of what it had become, which some will read as a cop-out. But that’s the point. The story isn’t about warning us against evil AI. It’s asking whether even good systems should be allowed to know us so completely.