Dear Friends,
My sister and I had a most excellent sibling adventure in Japan last month. Each destination had more to offer than we had time to take in. And so we did what one does: We used Google Maps to come up with a list of places to check out. Each restaurant, cafe, and thrift store we visited was seemingly made just for us. It was unsettling to “experience” each place through its ratings, reviews, and photos before experiencing it with our own five senses.
I want to reflect on that unsettling feeling by sharing my experience of discovering a delightful Japanese tavern (izakaya) in Osaka. Mostly, I want to reflect on how I found it in the first place (okay, it was Google Maps) and what it means for the future of how we travel, think about knowledge, and develop taste.
Japan has more restaurants per person than any other country, and I needed to pick one. On a sleeveless Tuesday night, balmy with the promise of spring, I followed the blue dotted line on my phone for eight minutes through a narrow alleyway lit with sepia lamps to an unassuming Japanese tavern. Eight bar stools were arranged around the chef’s grill, and only one was unoccupied. I poked my head in and, with a foreigner’s silent awkwardness, gestured toward the empty stool to ask if I could sit. Four beers and much laughter later, my nickname was “California Sasaki” and I was overcome with a drunken desire to eat dinner with my newfound friends every week for the rest of my life.
The chef, Gori-san, served drinks while patiently cooking small tapas-like dishes on the grill. Eventually, he asked how in the world I found his humble eatery. I was surprised to hear the lie come out of my mouth: “Oh, I just stumbled across it while I was walking down the street.” He looked amazed; we weren’t in a part of Osaka frequented by tourists. In truth, I never would have found it without the help of Google Maps. Unlike other nearby izakayas with hundreds or thousands of reviews, Gori’s has only 35, and almost all of them are in Japanese. But Google Maps highlighted something that caught my eye, and as I read through the effusive reviews, I knew this would be my place.
Predicting versus developing taste
On my short, buzzy walk back to the hotel, I thought about what Google Maps had learned from my visit. It knew that I was traveling alone. It knew that I spent a lot of time carefully searching for dozens of places before settling on this one. It knew that I spent much longer there than I typically do at a restaurant. I didn’t leave a review, rating, or post any photos. But Google Maps doesn’t need my active participation to feed the machine.
In 2018 (only five years ago!), Google Maps transitioned from an app for getting driving directions into an infinite directory for exploring the world. With every search, visit, rating, and review, Google Maps develops more accurate recommendations. For a short while it even displayed a “match score” revealing how likely you were to enjoy a particular place.
Unlike an ordinary map, which shows us all the same information no matter who is looking at it, Google Maps is personalized to your preferences. Some of the ways it predicts your preferences are fairly obvious: say, when you save a place as a favorite, leave a rating, or share it with a friend. But most of its predictions are based on less obvious measurements: How often do you return to the same place? How much time do you spend there? Is this new restaurant a favorite of people who share your tastes? Does AI detect similarities between its photos and the photos of your favorite places? Is it open during the times when you most like to eat lunch or drink coffee? Do the reviews include the same keywords as the reviews of your favorite places? Do the reviewers have similar profiles and even personality types?
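To make that concrete, here is a toy sketch, in Python, of how a handful of signals like these might be blended into a single “match score.” Every signal name and weight below is invented for illustration; this is emphatically not how Google Maps actually works.

```python
# Toy illustration only: invented signals and weights, not Google's system.
from dataclasses import dataclass

@dataclass
class PlaceSignals:
    repeat_visits: int          # how often you return to the place
    avg_minutes_spent: float    # dwell time per visit
    similar_user_rating: float  # 0-5 average among users with similar tastes
    keyword_overlap: float      # 0-1 share of review keywords matching your favorites
    open_when_you_go: bool      # open during your usual meal and coffee hours

def match_score(s: PlaceSignals) -> float:
    """Blend the signals into a rough 0-100 'match score'."""
    score = 0.0
    score += min(s.repeat_visits, 10) * 3               # loyalty, capped
    score += min(s.avg_minutes_spent, 120) / 120 * 20   # dwell time
    score += s.similar_user_rating / 5 * 30             # taste "neighbors"
    score += s.keyword_overlap * 30                     # "unfussy", etc.
    score += 10 if s.open_when_you_go else 0            # practical fit
    return round(score, 1)

# A first-time visit that nonetheless looks like a strong match.
izakaya = PlaceSignals(repeat_visits=0, avg_minutes_spent=150,
                       similar_user_rating=4.8, keyword_overlap=0.7,
                       open_when_you_go=True)
print(match_score(izakaya))  # 79.8
```

The real system presumably learns its weights rather than hard-coding them, but the basic move is the same: many small behavioral signals, collapsed into one prediction.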
It doesn’t matter whether Google Maps knows that I am lactose intolerant (though surely it can infer); it knows from my searches and location history not to recommend pizza or ice cream. On the other hand, it’s quick to point out the best sushi, fish tacos, and Vietnamese restaurants with an unfussy ambiance. It even highlights the reviews that mention “unfussy” — a favorite adjective of mine — which are more likely to steer me in their direction.
Google Maps is an unavoidable feedback loop, even if I refrain from leaving ratings or reviews. Every search, click, and visit feeds the system and trains it to make better predictions about where I want to go.
Feedback loops and survey fatigue
We are all trapped in feedback loops between the data we produce and the decisions it informs. In other words: Do Google Maps’ recommendations reflect my tastes? Or do my tastes come from Google Maps’ recommendations?
The concept of “feedback loops” was coined by mathematicians in the 1940s and ’50s as they developed anti-aircraft fire-control systems. By 2013, the concept was celebrated as the key to democracy, good governance, and personal growth. It was right around that time that I became enamored of “the feedback movement” and its potential to deliver on the promise of democracy and philanthropy.
In 2010, Thomas Goetz published The Decision Tree about making decisions with data and a designer’s mindset. This was around the same time that Kevin Kelly and Gary Wolf convened a series of meetups about “the quantified self” to explore how tracking one’s behaviors can lead to constant optimization — now a mainstream idea for anyone trying to close the rings on their Apple Watch or Fitbit, but then a novelty. Goetz’s book included a chapter on feedback loops, resurfacing the concept from the cybernetics movement of the 1950s, and Wired published it as a cover story.
Around the same time, a former World Bank bureaucrat named Dennis Whittle was exploring the next chapter of his career while a fellow at the Center for Global Development. In 2013, Whittle published a framing essay, “How Feedback Loops Can Improve Aid (and Maybe Governance)”:
If private markets can produce the iPhone, why can’t aid organizations create and implement development initiatives that are equally innovative and sought after by people around the world? The key difference is feedback loops. Well-functioning private markets excel at providing consumers with a constantly improving stream of high-quality products and services. Why? Because consumers give companies constant feedback on what they like and what they don’t. Companies that listen to their consumers by modifying existing products and launching new ones have a chance of increasing their revenues and profits; companies that don’t are at risk of going out of business. Is it possible to create analogous mechanisms that require aid organizations to listen to what regular citizens want—and then act on what they hear?
Whittle launched a new organization, Feedback Labs, to convene research and conversations about the topic. The Hewlett Foundation, where I would soon work, became one of its earliest funders. And my former colleague Fay Twersky became one of the biggest proponents of feedback loops in philanthropy. Even the Obama administration bought into the enthusiasm. In 2013, the White House hosted a workshop to explore “how to improve program effectiveness through participant feedback.” Soon enough, many of Obama’s speeches included the key phrase “close the feedback loop.”
Over time, Feedback Labs developed tools, strategies, frameworks, and a series of workshops to enable its participants to close the feedback loop. Soliciting feedback became so easy, cheap, and pervasive that we quickly grew sick of it. Would you be willing to take a brief survey? On a scale of one to ten, how likely are you to recommend this service to a friend? Would you provide us with more information to explain your rating? And will you subscribe to our marketing newsletter while you’re at it?
And yet, the movement for feedback loops became successful, if not inevitable, without attracting mainstream attention. Over the past decade, we have become accustomed to constant requests for our ratings and reviews. We can’t order an Uber without rating our last ride. We dutifully submit our survey responses after every workplace training. We unthinkingly click on thumbs and hearts (or don’t) to give tiny signals of feedback. We “comment for better reach” as a form of feedback. Even a pause in our scrolling (or viewing on TikTok or listening on Spotify) counts as a vote in the Feedback Machine. In a world governed by algorithms, we vote without knowing it.
Prediction without understanding
I began drafting this newsletter because I wanted to understand how Google Maps brought me to Gori and his mouth-watering okonomiyaki. Instead, I came to doubt the very concept of “understanding.”
I’ve been influenced over the past few months by Noah Smith’s idea that we’re moving into a third era of knowledge that is characterized by “prediction without understanding.”1 The invention of writing marked the first era. For the first time we could pass down our knowledge from one generation to the next, from one place to another.
The second era of knowledge, starting with the Scientific Revolution, moved us from documentation to inquiry, from writing things down to testing how they work. Experimentation enables explanation, which allows for prediction. From evolution to electricity to the internet, it’s remarkable what the second era of science has enabled through the power of experimentation and explanation. But now, Smith argues, we’re on the verge of a new paradigm shift in which big data plus machine learning allow us to predict things without understanding what causes them.
AI gives us accurate predictions without generalizable principles. We can verify its accuracy without understanding its reasoning. For instance, in the 1990s researchers at IBM built Deep Blue to search up to 200 million possible chess positions per second, guided by the biggest collection of chess principles ever accumulated. But then DeepMind demonstrated a far more effective approach. Rather than teaching the software how to play chess, they used machine learning to have the program simply play games against itself until it outperformed any existing chess computer or human player. It took just nine hours. As the researchers write, “this ability to learn each game afresh, unconstrained by the norms of human play, results in a distinctive, unorthodox, yet creative and dynamic playing style.” They can’t explain how it works, but it does.
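To get a feel for what “playing against itself” means, here is a deliberately tiny sketch: a program that learns a trivial take-the-last-stone game purely from self-play. The game, the learning rule, and every parameter are my own toy choices, nothing like DeepMind's actual system, but the spirit is the same: no strategy is written down in advance, and the program ends up rediscovering the game's solution on its own.

```python
# Toy self-play sketch: a pile of stones, players alternate taking 1 or 2,
# whoever takes the last stone wins. No strategy is hard-coded; the value
# table below is learned entirely from games the program plays against itself.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(stones_left, take)] = learned value for the mover
ALPHA, EPSILON = 0.2, 0.2   # learning rate and exploration rate

def legal_moves(stones):
    return [a for a in (1, 2) if a <= stones]

def choose(stones):
    """Pick a move: usually the best known, occasionally a random one."""
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

def self_play_episode(start_stones):
    stones = start_stones
    while stones > 0:
        a = choose(stones)
        if a == stones:                        # taking the last stone wins
            target = 1.0
        else:                                  # otherwise the opponent moves next;
            nxt = stones - a                   # whatever is best for them is bad for us
            target = -max(Q[(nxt, m)] for m in legal_moves(nxt))
        Q[(stones, a)] += ALPHA * (target - Q[(stones, a)])
        stones -= a                            # hand the smaller pile to the other side

for _ in range(20000):
    self_play_episode(random.randint(1, 12))

# The table ends up encoding the classic solution no one taught it:
# piles that are multiples of 3 are losing for the player to move.
for s in range(1, 13):
    best = max(legal_moves(s), key=lambda a: Q[(s, a)])
    print(s, "take", best, "learned value", round(Q[(s, best)], 2))
```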
There are endless examples of machine learning outperforming rule-based software, but it’s unnerving that we can’t always explain why it is so accurate (or, for that matter, when it isn’t). Smith concludes that we ought to stop conflating science and statistical prediction:
We are almost certainly going to call this new type of prediction technique “science”, at least for a while, because it deals with fields of inquiry that we have traditionally called “science.” But I think this will obscure more than it clarifies. I hope we eventually come up with a new term for this sort of black-box prediction method, not because it’s better or worse than science, but because it’s different.
A delicious meal made just for me
After my third beer, I FaceTimed my sister who was in nearby Kyoto to introduce her to my newfound friends. “So beautiful,” they exclaimed, and I felt my big brotherly protectiveness kick in, even though she was 100 miles away.
Traveling with Google Maps is way better than depending on a handful of outdated recommendations by a single author of an old Lonely Planet guidebook. And yet, I haven’t adjusted (and maybe never will) to the spookiness of making decisions based on recommendations I don’t understand. That no one understands.
🧰 A useful tool
Over the past couple of months, I’ve been alternating my music listening between Spotify’s new AI-based DJ, whose voice is modeled on Brooklyn-based Xavier Jernigan, and music from DJ Juan Amador, an Oakland-based human being, who plays great music from 8 to 10 p.m. each evening for KALW.
Spotify’s AI DJ has been my spookiest AI experience yet because music is so personal for me and it feels like the DJ knows my eclectic taste incredibly well. And yet, there is something about knowing that Juan Amador is recording live each night at a studio just a couple of miles from my house that makes it feel, in a way, more intimate.
Mashable has instructions on how to set up the AI DJ on Spotify if you want to try it out. And the Wall Street Journal has an interesting deep-dive explainer about how Spotify groups songs together.
Have a great week!
David
1. Perhaps “knowledge,” which conveys an element of understanding, is an inadequate word. Perhaps we haven’t developed the language yet to convey progress without understanding. Smith calls it, rather mystically, “the third magic.”