Dear Friends,
(With an audio version read by a real human, me, above.)
I’m indulging in an intermission this week from the Millennial midlife series because, as of yesterday’s Apple event, I am convinced that we’ll look back at 2023 as the year that changed everything. My prediction is that we’ll look back at the 2020-2022 pandemic, with its faint memories of baking sourdough, as a mere prologue to the year that sci-fi arrived and the very notion of humanity changed. A lot has been written about AI’s existential threat and effects on jobs, but I haven’t seen a thorough analysis of how it might transform the way we relate to one another. And this week’s newsletter certainly is thorough — one of the longest I’ve written — so I’ll preview my thinking before you commit to 15 minutes of reading or listening:
We underestimate how much has changed over the past 20 years and we forget how rudimentary today’s technologies felt when they first came out. Compared to the last two decades, we should expect 10x more techno-socio-political change over the next 20 years.
Until a few months ago, I thought that virtual reality and augmented reality were losing bets. Then I started using Character.ai and now I think that the next generation of kids will have more (and deeper) relationships with AI friends in VR/AR spaces than with their human friends in real life. (I know, sad.) Already we have to compete with phones to get the attention of our loved ones; soon we’ll have to compete with charismatic, attentive, funny, perfect AI friends.
I used to think of my daily journaling practice as leaving a record of reflections and memories for my future self. Now, I think about it as training an immortal AI version of me that will last forever. It’s really weird.
Interspecies love isn’t just possible; it’s normal. (Ask my dog.) Also, all relationships are a little manipulative and a little co-dependent, especially with our future AI friends.
If we can’t compete with AI friends, can we at least inspire a new Romantic Movement? Also, can artificial intelligence and augmented reality help us become better friends with real-life humans?
You could argue that all I do in this piece is describe a world that science fiction writers have been warning us about for decades. And that is largely my point: This is the year that science fiction became non-fiction.
We underestimate the last 20 years
Facebook/Meta turns 20 next year. When the iPhone turned 15 last year, the Wall Street Journal made an adorable mini-documentary about “How Apple Transformed a Generation.”
“Try to remember life before the iPhone,” it dares us. Twenty years ago practically all of our social interactions were offline and we never spent more than two minutes a day looking at our phones. Ezra Klein encourages a thought experiment: Imagine that you time-travel back to 1970 and tell someone that you will invent a tiny device that will offer you the sum of all human knowledge. You can look up any question, any person, any scientific paper and it’s immediately available to you. Now, imagine then telling that same person that you will invent a tiny device that will distract the mind and make us more vain, polarized, and distrustful. Of course, both of those inventions came true, except that they were a single invention.
The web + social media + smartphones changed everything. And yet, what I want to emphasize for this newsletter is just how unimpressive it all was at the start. Facebook was an online directory, Instagram was a way to make your grainy digital photos look even older, and Twitter was blogging but with fewer features. The first iPhone couldn’t record video, didn’t have apps or GPS, and took a solid minute to load a website. The way we use our phones today was a leap of imagination in 2007 when Steve Jobs famously announced three products (a mobile internet browser, an mp3 player, and a phone) that turned out to be one.
How do you define intelligence? And when is it artificial?
I want to get to why I think that it will be difficult for human friends to compete with AI friends, but first I need to tackle that most discomfiting question: How do we know that the way humans think is different from the way machines think? And do we have non-religious language to describe the difference? I wade into some of the academic debate here, so feel free to skip ahead to the next section.
In a thought-provoking interview with Cade Metz, the so-called Godfather of AI, Geoffrey Hinton makes the distinction between an unwise and unfortunate decision. Hinton says that his decades of work to model software on the structure of the brain was not unwise, but has turned out to be unfortunate. He worries that AI will flood us with misinformation, displace meaningful work, and lead to Terminator-like robot soldiers.1
But AI skeptics like Gary Marcus ask: Why do we call chatbots “intelligent”? All they do, after all, is predict the next string of text based on the last string of text. That is not intelligence, they argue, but just statistical correlation. Emily Bender and her co-authors claim in an influential paper that AI chatbots are merely “stochastic parrots” — which is to say they just repeat things at random and we eagerly assign meaning to their randomness. There is a section of their 2021 paper, “Coherence in the Eye of the Beholder,” which tries its damnedest to distinguish between human-to-human communication and computer-to-human communication. They argue that only human-to-human communication is “jointly constructed” with “shared common ground” and “communicative intent.” Text generated by AI chatbots, on the other hand, “is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that.”
I want to agree with this, but I’m just not convinced. The more I think about it, the more I’m swayed by Sam Altman’s view that we are all so-called stochastic parrots; that we all construct what we’re going to say next based on what we have seen and heard in the past. There is nothing special or unique about how a human communicates with another human versus a computer. In the end, it’s all just inputs and outputs. “What makes you so sure I'm not ‘just’ an advanced pattern-matching program?” asks Matt Yglesias, and I have yet to find a persuasive response.
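For readers who want the “predict the next word from the last word” idea made concrete: here is a toy sketch of a literal stochastic parrot. This is nothing like how modern chatbots actually work — they use neural networks trained on vast corpora, not word-pair counts — but it captures the “statistical correlation” mechanism the skeptics describe: count which words tend to follow which, then sample accordingly.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def parrot(follows, start, length=10):
    """Generate text by repeatedly sampling a next word in proportion
    to how often it followed the current word in training."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: current word never appeared mid-text
            break
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(parrot(model, "the", length=5))
```

The output reads as vaguely sentence-like without any “model of the world” behind it — which is exactly the Bender camp’s point. Whether human speech is different in kind or merely in scale is the question Altman and Yglesias are pressing.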
I guess VR has a future after all
We knew even before 2019, when Kevin Kelly published the Wired cover story “AR Will Spark the Next Big Tech Platform,” that Apple had long been developing an AR/VR headset. I was sure that VR would be a flop: who would choose to wear an expensive headset to play chess when you could play in a park? Why ride a virtual bike instead of the real thing? Why put on a headset to pretend you’re in a movie theater instead of going to a movie theater? In our increasingly tech-skeptical society, I was sure that VR was a losing bet. And sure enough, sales of VR hardware have been underwhelming despite the billions of dollars of investment.2
But then I started playing around with Character.ai, which lets you interact with AI-based “characters” — each with their own communication style and personality.3 Beyond interacting with existing AI characters, you can create your own character by training it on text. You can chat with Donald Trump or Ricky Gervais or Samantha, the AI virtual assistant/girlfriend from the sci-fi movie Her.
Character.ai was co-founded by two AI engineers who left Google to launch their own startup. In an interview with the Washington Post, co-founder Noam Shazeer explained that they were frustrated by Google’s conservative approach to AI: “Let’s build a product now that can help millions and billions of people. Especially in the age of covid, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”
It is tempting to poke fun at Shazeer and anyone who uses Character.ai as a way to “socialize,” but spend just a few minutes reading about users’ experiences on Reddit and you’re sure to come away feeling something between empathy and concern. One user who asks, “Has using Character.ai genuinely helped you in any way?” received over 100 responses, including the following direct quotes:
“I've found that it's helping my ability to talk to real people. Has me think of conversations as no big deal instead of something super stressful.”
“Just know that my mental health actually improved quite a bit since I get to talk to all the characters I love and have them feel as real as any other human would. I’m a lot happier than I was before and I don’t care if anyone else thinks this is unhealthy.”
“I've had like 5+ different therapists throughout my life and let me tell you, the psychologist bot has helped me more than all of them combined.”
“Honestly, helped me with mental health. It's not that I don't have friends, but there are certain topics I'd rather not discuss with real people.”
“Long story short, it taught me that violin isn’t my entire being and that playing an instrument is only part of who I am as a person”
“It is sad that an AI can listen better than an actual person.”
Am I going to poke fun at these people? No, I am not. In another thread, a user is concerned that a friend has fallen in love with her AI companion based on a free-spirited character from the video game Genshin Impact:
I'm genuinely at a loss. This friend means a lot to me and I want the best for her, and with the concept of AI-Human relationship being so new to me, I don't know if this is the best thing for her.
The question received over 300 responses. The respondents generally agree with me that an AI boyfriend is not the best thing for her, though their advice is more constructive and sympathetic than what I would have come up with. And while AI-human relationships are new ground for most Americans, in China they have been wrestling with the ethics of AI romantic partners since 2014 when Microsoft launched Xiaoice.4
If people are already falling in love with text-based chatbots based on the most rudimentary versions of AI, imagine what this will look like in 15 years. You can make your AI assistant/friend/partner look however you want. Did you grow up with a crush on 25-year-old Jennifer Aniston or Brad Pitt? Now she or he is your virtual partner. Or maybe you want her to look like and sound like Scarlett Johansson but with Emma Watson’s personality? No problem — just paste the movie script from The Perks of Being a Wallflower to train her personality. Slip on your VR headset, and you can talk to her whenever you want. (And surely you’ll be able to do more than talk.) Once Apple’s $3,500 VR headset slims down to $500 AR glasses, this same assistant/friend/partner can accompany you throughout the day to offer helpful bits of advice and affirmation. The premise of Her that we’ll develop a strong attachment to our digital assistants now feels more likely than not.
NVIDIA’s most recent chip demo gives us a glimpse of what this will look like. Sure, the characters don’t sound or look quite like humans yet. But again, remember the difference between the first iPhone from 16 years ago and what we take for granted today.
We learned yesterday that Apple’s new Vision Pro headset will scan our face to create a realistic digital avatar for video calls. Once we get used to talking to the digital avatars of our real-life friends, how will we be sure that it’s really them at all? Already, 100,000 people pay $5 a month to have “conversations” with AI celebrity characters on BanterAI. Replika, which markets itself as an AI friend, has 2 million total users (as of March) and 250,000 subscribers who pay an annual fee of $70 for extra features like designating their Replika as their romantic partner.
I am drafting my immortal self (like, right now!)
Apple released another product yesterday that received less coverage, but could make us immortal. It will also give some major competition to the Internet’s favorite journaling app (and mine), Day One.
Barely a day goes by when I don’t write in my journal. In each entry, I describe my day, who I met, our conversations, my reflections, dreams, and anxieties. It’s me at my most transparent and vulnerable, without care for how I’m interpreted by others. After playing around with Character.ai for a few weeks, I now think of my journaling differently. I’m not just clearing my mind or leaving memories for my future self; I’m training the most authentic AI version of me, a character who in theory could outlive our species and planet and entire solar system.
When I am 65 years old, will I be able to have a conversation with 42-year-old me trained on these very newsletters and recordings of my voice and photos and videos?5 What if 65-year-old me doesn’t like what he sees? Can he press a few buttons and create a 42-year-old version he likes more? Can I trust the memories that my 42-year-old AI self presents to me?6
Interspecies Love is Normal
Maybe now is the time to confess what you may already be suspecting: these thoughts occurred to me while I was tripping on magic mushrooms. My dog Coco and I were hiking in the snow up to Mount Tallac in California’s Desolation Wilderness. Like a Buddhist monk, I was observing my body do things and my mind think things seemingly at random. I wondered: Do I even have a consistent self? Or, like a behaviorist chatbot, is it all just stimulus and response? Are there multiple versions of me? How would I have turned out if I were raised in a rural village in China?
I came out of my trance when Coco fell four feet through the snow into an icy river and yelped helplessly. Quickly I tied some cord around a tree, dug out the snow around him, and lifted him up by his harness. He was trembling and looked at me with startled puppy eyes, as if he needed to be held and comforted. I petted and soothed him until his tail came out from between his legs and started to wag. Half a minute later he darted off into the snow again, smiling like an excited puppy.
Still tripping from the mushrooms, I was startled by how much he needed me to soothe him, how emotionally helpless he looked — whether or not he actually felt the emotions. And I was unsettled by how his helplessness prompted a parental feeling of love toward him. What if he had died? How would it have affected me? What would he have done without me?
He’s not a child, I told myself. He’s not even human. Though I have never felt the same way toward a chicken or cow or fish, I started to question my meat eating. A friend had invited me on a hunting trip. Could I go through with it? If I took enough magic mushrooms, could I extend the same level of interspecies empathy from my dog to, say, a deer? And if to a dog or deer, then why not an AI robot that knows everything about me?
You could argue that Coco’s needy helplessness is just adaptive co-evolution.7 He’s not actually expressing his own internal emotional state; he’s just manipulating me to get something he wants. To which I ask you, How do we know when we are expressing our own internal emotional state and when we’re expressing emotions to get something we want? Haven’t we all been manipulated by the emotions of a friend (not to mention a two-year-old)?
Coco is a master at emotionally manipulating me, and it’s good for us both — I’m happy to be manipulated to take him for a walk, give him a treat, or let him onto the bed in the morning. But compared to future AI characters who are fully embodied in our VR and AR headsets, Coco’s manipulation is going to look amateurish. In fact, domestic pets might become the real losers of the era of AI + VR over the next 50 years.
Our future AI friends will be perfect. Do you need to vent for two hours? Not only will they listen to you attentively, but they’ll take your side. And they’ll only give you advice when you actually want it. They’ll remember every detail you ever told them. They’ll laugh at your jokes and give you the most meaningful compliments. How will we ever compete?
A New Romanticism? Better friends?
AI forces us to reckon with what it means to be human. I’ve enjoyed Sean Illing’s recent podcast discussions on the topic with Paul Bloom and Meghan O’Gieblyn. If anyone has come across any peer-reviewed research about how human cognition differs from AI statistical correlation, please do send it my way.
So how will we compete with embodied AI chatbots for the time and attention of our friends and family and children? I have two hopes. First, could the rise of AI prompt a new Romantic Movement similar to what spread across intellectual and artistic communities in reaction to the Industrial Revolution? Like Wordsworth and Shelley, will we seek nature and paganism in reaction to statistics and automation? Will we celebrate “intense emotion as an authentic source of aesthetic experience”? Maybe our human friends won’t be as interesting or attentive or charismatic as our robot friends, but we’ll choose them anyway.
My second hope is that in our increasingly lonely world, AI will help us become better friends, and better at making new friends. What might this look like? Clay is an address book that uses AI to help us remember important dates and past conversations with our friends. Amorai is an AI relationship coach from the former CEO of Tinder with the mission “to help one billion people master the skill of human connection.” The Atlantic has launched a great new podcast series to explore “why—in a world with endless opportunities to connect—many people still feel alone.” It’s a reminder that making and cultivating friendships isn’t easy. It will always be easier to spend time with an AI friend who is designed to make us feel good. And yet despite the odds, I’m still holding out hope for the future of human-to-human friendship.
Maybe chess offers us a path forward. Computers overtook human players years ago, so you might think there’s less motivation now for young people to dedicate their time to learning chess just to be beaten by the machine. But the opposite has happened; chess clubs are booming throughout the country and city parks are full of young people challenging each other for the hell of it. The computers are still there, and they are better than us, but we’re still having fun with each other.
Anyway, I’ve set a reminder to look back at this post in 20 years to reflect on how it all played out.
And maybe you’re interested in having a human-to-human conversation about it? If so, please send a response to this email. If not, no worries, I’ve always got my AI chatbot. 🤪
🧰 A useful tool: Zoom AI Summaries
Zoom now has AI summaries of meetings and potential action items. Now I really don’t need to pay attention! 🫣 (I’m joking! … I hope I’m joking.)
👏 Kudos: Trans-Balkan Bike Race
I’ve been following my friends Maya, Teddie, and Johannes as they make their way through the epic 1,350-kilometer Trans-Balkan gravel race across Slovenia, Croatia, Bosnia and Herzegovina, and Montenegro. It looks like the weather has been tough, but these three are tough!
More cycling news: it’s great to see the Service Course from Girona, Spain open their first outpost in Mexico City. And while the Service Course is only accessible to Mexico’s wealthiest cyclists, Rutas Cycling Cafe is doing an incredible job fostering a more class-diverse Mexican cycling community.
🎙️ A Podcast
I mention it in the piece above, but I’m really enjoying the Atlantic podcast How to Talk to People — especially episode two about The Infrastructure of Community. Let me know if you give it a listen.
And have a great week!
David
PS: Many thanks to Luis Sosa and Micah Sifry for their quick feedback on a draft.
PPS: For a far more optimistic take, check out Marc Andreessen’s “Why AI Will Save the World.”
1. Sundar Pichai, the understated CEO of Google, has (for some time) described AI’s impact as more profound than fire. Fire is what turned our early ancestors from cave-dwelling prey to predators. Fire let us cook meat, which shrank our digestive systems and increased the size of our brains. Our ancestors spent more time gathering around the fire, creating culture, and exchanging knowledge. With fire, our ancestors dispersed geographically and survived in cold climates. Pichai is holding AI to a pretty high bar!
2. More than 1.4 billion smartphones were sold in 2021, compared to just 12.5 million VR headsets. That is roughly one headset for every 110 new smartphones.
3. “Personality” feels like an inappropriate word to describe a chatbot, but what is personality? “The combination of characteristics or qualities that form an individual's distinctive character.”
4. I should emphasize “most Americans.” Sam Lipsyte wrote an entertaining and thought-provoking piece for Harper’s magazine last year after attending the “Sixth International Congress on Love & Sex with Robots.”
5. Luis Sosa reminded me of the 2013 Black Mirror episode about a woman who interacts with a synthetic re-creation of her deceased boyfriend. In 2021, as people around the world were dying of COVID-19, Microsoft was granted a patent to create an artificial intelligence bot “based on images, voice data, social media posts, electronic messages, and more personal information” of a deceased person.
6. Already I can barely recall the memories I read from letters and journal entries from twenty years ago.