4 Encounters of the AI Kind

Click the play button above to listen to a partial AI audio read of the first section of this newsletter. The audio is courtesy of ttsmaker.

April 2025. 

I meet a friend at Pinhole Coffee, which has a pour-over coffee program, a rotating local art series and a no-Wi-Fi policy. It’s the perfect place to meet this friend, who has the kind of career I would have died for when I was a teenager: She writes regularly for The New Yorker, has published an excellent and rapturously reviewed book, etc. I’m not sure what kind of career I want in 2025, and I tell her why.

“I’ve been playing with some of the AI tools,” I say. “ChatGPT. Claude. They make me feel like I’m operating a horse and buggy and the first gas-powered car just came around the corner.”

“Tell me more?” she asks me.

Image generated by DALL-E. The prompt I used was “I'm writing a newsletter about my personal encounters with AI. Create an image that captures the curiosity, surprise, skepticism and interest a new user of AI has on encountering it.” After the first attempt returned a White woman with short hair in a robotic costume, I added an additional instruction: “Make the user a Black woman.” DALL-E changed the appearance of the user and also, unprompted, removed most of her robotic costume in favor of what you see here.

“They can just do so many things faster than I can. I’m not saying they can do things better. But they can summarize a meeting or come up with five pretty good headline ideas in two seconds. I prided myself on being a good processor of information. Now I feel a little … out of date. Do they make you feel that way?”

They do not. She tells me about how her days are filled with hours-long searches for the perfect gerund, and how lately she’s been using Roget’s Thesaurus to expand her sense of wordplay. I hadn’t heard of Roget’s and am delighted by the link she sends me. Roget’s is “a reverse dictionary. With Roget’s, the user starts with an idea and then keeps flipping through the book until he finds the word that best expresses it.”

Querying the tool for a ready-made description of what’s in your head is not unlike how OpenAI CEO Sam Altman has described the way “older” generations use AI, and I’m amused to note that the first edition of Roget’s was published in 1852, 34 years before Carl Benz developed the first gasoline-powered automobile. I’m less amused to note that a Pinhole matcha latte costs almost $7, a bill that reflects the cost of living in San Francisco and stands as a fine representation of why I don’t take the time to keep going down the rabbit hole on these kinds of serendipities.

June 2025.

Everything about Mary Meeker suggests a force to be reckoned with. She started out as a stockbroker at Morgan Stanley in the early 1980s, a woman at one of the big firms during Wall Street’s era of liquid lunches and boom boom rooms, and became a venture capitalist focused on the Internet before everyone wanted to be one. (She played an important role in the Netscape IPO, for the O.G.s out there.) 

For years, Mary released meticulous annual trend reports about the Internet that were so informed, sharp and prescient that even misogynistic Silicon Valley dropped its empire-building for two days to read and discuss them. Her last report came out in 2019. 

On May 30, 2025, she released a 340-pager titled “Trends: Artificial Intelligence.” 

Image courtesy of Mary Meeker’s May 30, 2025 report, “Trends: Artificial Intelligence.” Meeker notes that AI adoption is happening faster than any other technology in history, including the Internet.


I was immediately struck by the historical note Meeker sounds in the overview. Her first thoughts about AI are alarms about the risks of an AI space race between the U.S. and China:

“The reality is AI leadership could beget geopolitical leadership – and not vice-versa,” Meeker writes. She then goes on to make “a long-term case for optimism,” based on the idea that, because global adoption of AI technology is happening so quickly and so thoroughly, “thoughtful and calculated leadership can foster sufficient trepidation and respect, that in turn, could lead to Mutually Assured Deterrence.”

Mutually Assured Deterrence.

The phrase, of course, is meant to recall Mutually Assured Destruction, or MAD. Donald Brennan, the national security analyst who coined the acronym, was a top figure in the RAND Corporation’s circles in the 1950s. In other words, he was part of the team that gave us the Cold War, the military-industrial complex, and, depending on how far you want to follow the falling dominoes, today’s unholy trinity of Palantir, Peter Thiel and Elon Musk.

But one of the things that has always made me MAD is how many smart people seem to have fallen for the okey-doke on mutually assured destruction, which was not meant to be an aspirational philosophy. In fact, Donald Brennan hated MAD and thought the U.S. should run from it as fast as we could.

Another chart from Mary Meeker’s May 30, 2025 report on AI, showing the explosive launch of ChatGPT.

“An institutionalized MAD posture is a way of insuring, now and forever, that the outcome of [nuclear] war would be a nearly unlimited disaster for everybody,” wrote Brennan, in 1971. “While technology and politics may conspire for a time to leave us temporarily in such a posture, we should not welcome it—we should rather be looking for ways out of it. And they can be found.”

The italics in Brennan’s quote are mine.

May 2025.

Another friend, who built a technical career, offered to walk me through the ways he uses AI tools. 

At work, he told me, he is an “evangelist” who has shown many people how to use AI to help manage their day-to-day workflow and feel more confident at their jobs. It feels good to help others understand how to work with these tools, he said, particularly since they have done so much for him.

“Tell me more?” I asked him.

“Everything unlocked for me when I tried NotebookLM,” he explained. “Have you tried NotebookLM?”

“No.”

So we tried it. In case you also have not tried it, NotebookLM is a Google-powered tool with the tagline, “your research and thinking partner.” Unlike ChatGPT or Claude, its purpose isn’t to give you a regurgitated experience from the intestines of the entire Internet. You upload the documents, video, and other sources you want it to use and then prompt it with questions. NotebookLM’s responses are based on your sources, which means that the tool acts as your own personal mirror. 

Image generated by ChatGPT’s Canva GPT and edited by me. The prompt I used was “I'm writing a newsletter about my experiences with AI. Can you make a graphic that shows a person encountering AI for the first time? The tone of my newsletter is personal and literary. The message the design should convey is a mix of curiosity and surprise.”

You can also ask for the responses to come in the form of an “audio overview.” When you request this, two calm, encouraging AI-generated voices will respond to your queries in the form of a conversation. The overviews are structured like podcasts, with the two hosts sharing gentle banter and offering exaggerated emotional responses to what the other has to say. And because AI models have adopted the aggressively positive tenor of American communications, the voices inevitably talk about how terrific, how smart, how accomplished or how intriguing the prompter is.

We uploaded my resume and LinkedIn profile into NotebookLM. We asked it what career strategy I should follow. We also asked for the response to come in the form of an audio overview.

“The core narrative these sources tell is about this award-winning journalist,” the first voice, a man, said. “And specific wins, too. What really stood out to me was the impact piece. She did a lot more than just writing stories.”

“Wow,” said the second voice, a woman. “We’re seeing so much in the research about how companies want the specific skills that journalists bring.”

“We are. And this person has so many examples about how she delivered.”

She did? She did. Listening to these encouraging voices, so authoritative in their assertions, I felt gassed up, affirmed, empowered. 

For a brief moment, I forgot the warnings I had read about how AI models have been expressly designed to affirm people’s statements, regardless of whether those statements are delusions. About the people who believe they’re the next messiah because ChatGPT told them so, and about the children who died by suicide following encouragement from another tool, Character.AI.

“Isn’t that great?” my friend asked me. 

He’d paused the audio overview to check in. 

“And that’s all fact,” he said. “That’s all you.”

He was smiling at me. I smiled back. 

The tool is all me, I thought. Will it soon get all of me? 

— Caille

P.S. Ready to start your own newsletter? Get a free 30-day trial of beehiiv with all premium features (and then you can keep using it for free!) via this link.