Hi Friends,
I haven’t been writing much on Substack, though ChatGPT and I have been writing a great deal. I just asked it for examples of people complaining on Substack about AI-based writing:
And so on. It’s not like anyone wrote angry screeds about how to detect whether a writer used spell check. But spell check wasn’t a threat to our sense of self. Outsourcing our writing, though, feels like outsourcing our thinking — and that ineffable arrangement of words and punctuation we call “our voice.”
I largely agree with the complainers: It’s startling to read a post by a familiar writer and sense that, all of a sudden, something has changed. Sometimes the writing actually improves, and yet the inauthenticity feels like a transgression.
But I should come clean: I’ve been using ChatGPT and/or Claude to edit my posts on Substack for at least a year. My basic prompt is something like “You’re an editor at the New Yorker. Tell me how I can improve this piece so that my arguments are coherent, my prose is precise, and my transitions aren’t jarring.” I also ask: “What aspects of this post will readers find most objectionable?”
A good editor serves as an advocate for the reader.1 A good editor asks the fundamental question that good writers don’t ask nearly enough: What is the point of this text? A good editor will tell you what to improve, but not how. Every once in a while, though, ChatGPT offers up an example of how I could more elegantly phrase a point. I incorporate its recommendation in deference to the reader2, and a part of the text is thus no longer “mine.”
So far, no more than 5% of what I’ve written has been rewritten by AI, but I sense the number creeping up as AI improves.3 At what point is the writing no longer mine, and does it matter?
What exactly is the difference between learning and finding something out?
Joshua Rothman of the New Yorker asks the same question, but more generally: Why Even Try if You Have A.I.? We improve through repetition, of course, but why subject yourself to those “10,000 hours” of repetition when you can fast-forward to the polished product?
We’re so used to trying things for ourselves that it seems bizarre to imagine us ever stopping. And yet, more and more, it’s becoming clear that artificial intelligence can relieve us of the burden of trying and trying again. A.I. systems make it trivially easy to take an existing thing and ask for a new iteration …
Is this kind of variation-creation the same thing as human creativity? These are important questions to ask because, as A.I. grows more powerful, we will be tempted more and more to give up in advance and let it figure things out for us.
Similarly, Paul Graham speculates that, just as we can no longer weave our own clothes, we’ll lose our ability to compose once we replace writing with prompting:
Funny enough, my wife and I talk often about the similarities between writing and weaving. I’m not a professional writer, and she’s not (yet) a professional weaver; still, we often draw comparisons between my time spent writing and her time spent weaving.4

There is nothing I’m doing right now that ChatGPT likely couldn’t do better. I could paste this draft into ChatGPT and ask for twenty witty aphorisms in the style of a more successful writer. My wife could use any number of AI tools to create a pattern and send it to a digital loom.
We could. And yet, the temptation of automation makes struggling through the work ourselves even more valuable, even if the result is objectively worse.
At least this is what I’m trying to convince myself, just as Rothman warns in his New Yorker essay that “if we’re not careful, our minds will do less as computers do more, and we will be diminished as a result.”
It’s also what every college professor is trying to convince their students: “We’re here to teach you how to think,” they plead, “not how to get an answer.” But what exactly is the difference, asks Zvi, between thinking and finding something out? I studied trigonometry and calculus, but I’ve never used either in my adult life. Would my life really be diminished if I had skipped those classes?
In fact, I learned exactly three marketable skills in college, and they are no longer so marketable.
Learning post-AI skills in middle age
My career thus far depended on three skills: researching, writing, and speaking a second language. Those three abilities practically guaranteed a decent job in the knowledge economy during the past twenty years, but no longer.5
As AI steals away the value of writing, research, and multilingualism, what new skills should I learn?6 I’d genuinely love to hear your thoughts — either via email, or in a comment below.
Here’s my preliminary thinking: In the future, my added value might come from:
Convening and bridge-building — Working across partisan divides toward shared goals.
Motivating young people — Helping younger colleagues see clearly where they can make a difference and how to ignore the nihilistic distractions.
Softening taboos (politely) on empirical research — Sensitive research topics like gender differences and heritability need conscientious, trustworthy translators who can celebrate both data and dignity.
In other words, less analytical, more psychological. Less research and writing, more coaching and convening.
Even if these are the right skills for the next twenty years, I’m not entirely sure how to build them other than by trying. For the moment, I’m grateful to be working alongside Richard Reeves and the team at AIBM. Richard has a rare gift for all three, and I’m soaking it in through observation.
Trust is the bottleneck, not knowledge
Anyway, I wanted to write this out before continuing the series on a bipartisan politics for human flourishing. ‘Cause, let’s face it, I’m not going to come up with any new ideas for that series that you couldn’t get from a chatbot.
But new ideas aren’t what’s missing, as Klein and Thompson reveal repeatedly in Abundance. We know how to build homes and high-speed rail. We have all the knowledge to install solar panels and wind turbines. The bottleneck is a breakdown in trust between the government, the voters, and the private sector. The solutions are consensus and good ol’ fashioned project management.
wrote an epic post last week, observing that “the digital world has almost no friction while the physical world is full of it.” ChatGPT makes magic on our screens in seconds. But then we go out into the real world, and there is friction and frustration everywhere. Friction, she writes, “tells us where things are straining and where care is needed and where attention should go.” Amen. Let’s get at it.

What about you? Is AI making you rethink how you add value at your work and what new skills you want to develop? I’d love to know.
Yours,
David
1. I used to work with a human editor, who was a great advocate on behalf of the reader. She improved my writing, but not as much as ChatGPT. She was conflict-averse and wasn’t all that interested in the topics I write about. ChatGPT, by contrast, is insanely interested in my interests, pushes back on my arguments to make them stronger, and offers infinite alternatives for framing an issue.
2. Or is it in deference to the robot?
3. Or is it that my writing is getting worse?
4. As in most creative pursuits, we are swimming in source materials (words, ideas, observations, fibers, colors, patterns) and attempting to rearrange them in a novel way that stirs an emotional experience.
5. My friend Enrique Mendizabal writes: “In an AI-saturated future, where instant access to synthesised knowledge might diminish the need for traditional research, think tanks face the existential threat of obsolescence.”
6. Tyler Cowen recently suggested that people in their 40s are the most screwed. If you’re in your 60s, you’ll retire before you’re displaced. If you’re in your 20s, you’ll found the AI-native organizations that displace the incumbents. But if you’re in your 40s, you’re too old to retrain and too young to retire. 🫠
"Why do anything anyway?" The most trite answer to "why climb a mountain" -- "because it's there" -- is also the most true. It's the same same reason you write, or why your wife weaves. We do these things because these actions create meaning in an otherwise meaningless existence. Let's keep writing even thought it's pointless!
I’ve also been intrigued by the recent trend of writing in weird ways and posting gibberish as pushback against AI-infused writing. It’s funny and human and makes my brain tingle.
Also, your post reminds me of Michael’s Consciousness Box post from a couple of years ago, where he questions what AI cannot do. I think the point you’re also alluding to is “what part of learning is rooted in human thinking, something AI cannot learn.” No answer from me yet, just a question to think on.
One more thing - whentaken.com. An interesting online game for you.