Sandra’s fingers hover above the keys in the 4:00 PM silence of her home office. The blue light of the monitor reflects in her glasses, casting a pale, clinical glow over the 14 browser tabs she has left open. She needs to send a feedback email to Kevin. It is a sensitive matter, one involving 44 separate instances of missed deadlines and a general lack of cohesion in his recent reports. It requires nuance. It requires the kind of gentle but firm authority that only comes from years of shared history and mutual respect. Instead of typing the first word, Sandra clicks over to a generative AI interface. She types a prompt: “Write a polite but firm performance review email for a colleague who is missing deadlines.”
What follows is a performance of efficiency. In less than 34 seconds, the machine produces 444 words of flawlessly structured prose. It is grammatically perfect. It is devoid of typos. It is also, as Sandra realizes on the third reading, utterly soul-crushing. The email begins with the dreaded phrase, “I hope this email finds you well,” a sentence that has become the digital equivalent of a limp handshake. It continues with generic encouragement that sounds like it was written by a committee armed with HR manuals from 1994. Sandra spends the next 54 minutes, double the time it would have taken to write the damn thing herself, deleting the robotic fluff, re-inserting the specific details about the client meeting on the 4th, and trying to scrub the “confident wrongness” from the AI’s tone. It is a new kind of labor: the labor of the curator, the editor of ghosts.
The Human Touch
I am intimately familiar with this specific brand of frustration. Only 4 hours ago, I hit “send” on a high-priority message to my own editor, only to realize with a sinking, hollow feeling in my gut that I had forgotten the attachment. It was a 124-page manuscript, the culmination of months of work, and I had sent a hollow shell of an email. The follow-up email, the “Oops, here it is,” is a uniquely human humiliation. It is a mistake born of haste and heart. But as I sat there, staring at that empty message in my sent folder, I realized I would much rather deal with my own clumsy errors than the polished, vacuous errors of a machine. My mistake had a signature. It had a heartbeat. The AI’s mistakes, however, are a form of gaslighting. They look so right that you start to doubt your own reality, spending 24 minutes wondering if maybe “synergistic alignment” is actually a thing a human would say.
The Lighthouse Keeper’s Wisdom
Simon R.-M. would have no patience for this. Simon is a lighthouse keeper on a stretch of coast where the wind howls at 64 knots on a regular Tuesday. He lives in a world of 4 colors: the gray of the sea, the white of the foam, the black of the rocks, and the piercing yellow of the light. He spends his days maintaining the 14 large glass lenses that focus the beam. If Simon outsourced his vigilance to a proxy, he wouldn’t just be saving time; he would be risking lives.
“The machine can tell you if the bulb is on,” he told me during a visit 34 months ago, “but it can’t tell you if the fog is thick enough to swallow the sound of the horn. You have to be there to hear the silence.”
The Atrophy of Judgment
This delegation of cognitive labor creates a terrifying form of deskilling. Think about the 144 tiny decisions you make when you write a single paragraph. You choose a word for its texture; you choose a sentence length for its rhythm; you choose a metaphor because it resonates with a memory you share with the recipient. Each of those decisions is a rep in the gym of your own consciousness. When you outsource those decisions to a predictive text engine, your judgment begins to atrophy. You lose the fluency of your own mind. You become a passenger in your own life, watching the scenery go by at 104 miles per hour, unable to grab the steering wheel because you’ve forgotten where it is.
I see this in the way we handle data. We have 44 different dashboards telling us 44 different versions of the truth, yet we feel less informed than ever. We have outsourced the synthesis of information to algorithms that prioritize engagement over accuracy. We are drowning in signals but starving for meaning. This is why we need to return to systems that enhance, rather than replace, our natural capabilities. Platforms like Brainvex are essential because they focus on the actual architecture of the mind, encouraging a level of cognitive development that cannot be mimicked by a prompt and a response. It’s about building the muscle, not just buying the trophy.
The Translator’s Burden
There is a specific kind of exhaustion that comes from editing AI. It’s the exhaustion of a translator who is trying to turn a dead language into a living one. You look at a sentence like “We strive for excellence in all our endeavors” and you have to find a way to make it sound like it came from a person who actually cares about the project. It’s a 34-step process of deconstruction and reconstruction. It would have been faster, cleaner, and more honest to just say, “Hey Kevin, this isn’t working, and here’s why.” But that would require us to own the tension. It would require us to step out from behind the curtain of the “professional” proxy.
The Four Stages of Digital Decay
Consider the 4 stages of this digital decay. First, we use the tool for inspiration. Second, we use it for structure. Third, we use it for the draft. Finally, we use it to think. By the time we reach the 4th stage, we are no longer the authors of our own outcomes. We are merely the ones who hit the “generate” button. We have traded our agency for a false sense of efficiency. We save 24 minutes of writing only to spend 34 minutes in a state of existential dread, wondering why our relationships feel so transactional and our work feels so hollow.
The Logbook of Presence
Simon R.-M. once showed me a logbook from the year 1894. It was filled with 444 entries, each one written in a meticulous hand. Some were just notes about the weather, but others were deeply personal reflections on the isolation of the tower. He didn’t have an AI to draft his thoughts. He had his own mind and the 14 hours of darkness each night. Those entries weren’t just data points; they were the record of a human being remaining present in his own life. When he made a mistake, a blot of ink or a misspelled word, he didn’t hide it. He crossed it out and kept going. That smudge of ink is more valuable than 10,004 pages of perfectly generated AI text because it proves that someone was actually there.
The Crossroads: Light or Reflection
We are at a crossroads where we must decide if we want to be the light or the reflection. The proxy is tempting because it promises a world without friction, a world where we never have to feel the sting of a poorly phrased sentence or the embarrassment of a forgotten attachment. But friction is where the heat is. Friction is where the growth happens. If we continue to outsource our thinking to systems that don’t understand the question, we shouldn’t be surprised when we find ourselves living in a world where nobody understands the answer.
Embrace the Imperfect You
It’s time to stop the 44-minute editing sessions. It’s time to close the 4 extra tabs and look at the blank screen until it starts to look back. It’s time to risk being seen. The next time you have a difficult message to send, don’t ask the machine to hide you. Type the words yourself. Forget the attachment if you have to. Make the 14 mistakes that make you who you are. The person on the other end isn’t looking for a perfect proxy; they are looking for you.
