Jacobs shared a tension he keeps encountering in conversations with people in the tech world. "I was having dinner with a woman at Google who does human-AI interaction, and she was stressing the story of democratization, that AI is going to empower entrepreneurs to compete with big companies."
But that's only one version. "Another story that I hear just as often is that it is going to be a winner-takes-all world. Because of the positive feedback loop of AI, as soon as one company gets a slight advantage, they can replicate and create a hundred workers to do the same job more quickly, and then those hundred can create another thousand. So it will be a world of the 0.01% and the other 99.99%."
Jacobs didn't pretend to have the answer. "I am not enough of an expert, and I don't think anyone is, to know which is the correct one." But he's rooting for the first version: "It does have the potential to create this wonderful scenario where you don't have to have half a million dollars to hire people, that you will be able to use AI as your starting staff."
The entry-level job problem nobody has solved
One of the most striking insights from the session came from a conversation Jacobs had with a startup founder who uses AI for most of his coding.
"AI is poised to replace a lot of the entry-level jobs. But what happens when you need that next step on the ladder? Can that person do the job of being in charge of the AIs without having gone through the boot camp of that entry-level job?"
The founder's solution surprised Jacobs: "He says, let's take a week off where you have to do the grunt work that you would normally offload to AI. Try coding without AI, just so you remember and get a feel for it."
Jacobs framed the open question directly: "How much grunt work should we voluntarily do just to be able to manage the AIs?" For organizations building AI champion programs, this is one of the harder design questions to get right.
Using AI to research an article about avoiding AI
Jacobs was upfront about the irony of the whole thing. "This article was about how I stopped using AI for 48 hours, and yet, in preparation for the article, I used a ton of AI, which I admitted in the article. I tried to be upfront."
The New York Times, he explained, has clear rules: "You're not allowed to let AI write anything for the New York Times, but you are allowed to use it for research. And it was amazing. I used ChatGPT and a couple of deep research prompts, and the stuff that it found was far beyond Google, because it also searches PDFs."
But the research process taught him something important about how AI works. "AI is very obsequious. It's like a brown noser. It sensed my thesis, and the thesis was AI is everywhere, even where you don't expect it. And it started to really cater to that thesis. It would say, you know, AI is here, AI is there, and I would say, can I see sources? And it would say, well, I might have exaggerated a little."
That experience changed how Jacobs prompts. "I now say, show me sources. I try to say, show me both sides of this issue. Don't just cater to what you think I want." He's particularly fond of a technique he called steel-manning, the opposite of straw-manning: "Stating the opponent's point of view in the strongest possible way. I said, can you steel-man the other side? What are the pros and cons of this, as opposed to what are the benefits? Everything has cons and pros."
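The guardrails Jacobs describes, asking for sources, for both sides, and for a steel-man of the opposing view, can be baked into a reusable prompt template. A minimal sketch (the helper name and exact wording are illustrative, not Jacobs's own prompts):

```python
# Illustrative sketch: wrap any research question with the
# anti-sycophancy instructions Jacobs describes. The function name
# and template wording are assumptions, not a quoted prompt.

def steelman_prompt(question: str) -> str:
    """Add source, both-sides, and steel-man instructions to a question."""
    return (
        f"{question}\n\n"
        "Before answering:\n"
        "1. Show me your sources, and flag any claim you cannot source.\n"
        "2. Present both sides of this issue, not just the side you "
        "think I want to hear.\n"
        "3. Steel-man the opposing view: state it in its strongest "
        "possible form.\n"
        "4. List the cons as well as the pros."
    )

prompt = steelman_prompt("Is AI everywhere, even where we don't expect it?")
print(prompt)
```

The point of a fixed template is that the skepticism no longer depends on remembering to ask; every query carries the same counterweight to the model's tendency to cater to a sensed thesis.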
What AI can't do: perspective-taking and lived experience
When asked what human qualities will remain essential, Jacobs went straight to the through-line of his career. "One of the theses of all of my stuff is walking a mile in someone else's shoes, experiencing the world from another point of view to learn what their experience is like."
In business terms, he connected it to dogfooding. "You're supposed to experience what the customer experiences, even if you're, you know, you own a dog food company, you should taste the dog food." That kind of perspective-taking, he argued, "is something that AI can't do. Whether that's just an intellectual exercise, or actually going out, and, you know, you own a car wash company, going and putting on a wig and a mustache and going through the car wash to see what it's like."
Jacobs sees his own career as what he called "AI-resistant, not AI-proof." His articles are deeply experiential, first-person. "I go and I try to live according to the Bible. Until the AIs are android-like robots that look and act like humans, that's going to be hard for them to replicate."
But he was honest about the limits: "If it's just a non-fiction article or book, like the history of Russia, I'm not sure that humans will be able to do a better job than AI." His friend Kevin Roose at the New York Times recently ran a quiz where readers compared AI writing to writing by established authors, "and people were fooled. I think people overall preferred the AI writing."