April 3, 2026

Ethan Smith (Graphite): How to Win Citations in ChatGPT, Claude, and Gemini

Graphite CEO Ethan Smith on why Answer Engine Optimization is almost entirely a long-tail game, and how to win citations in ChatGPT, Claude, and Gemini.
By Daan van Rossum, Founder & CEO, Lead with AI

Byline: Based on the March 27, 2026 Lead with AI PRO live session with Ethan Smith, Founder and CEO of Graphite.

Most of the advice circulating about Answer Engine Optimization is built on short, search-style prompts.

That's a problem, because 60% of AI prompts are ten words or longer, and most marketing tools are not tracking them.

Ethan Smith, whose growth agency works with Netflix, OpenAI, Masterclass, Adobe, and Meta, has spent the last year mapping what actually gets cited in ChatGPT, Claude, and Gemini.

His core finding: AEO is almost entirely a long-tail game, and almost nobody is playing it the right way.

AI usage is four to five times larger than most reports say, and search isn't shrinking

Smith's recent research on AI usage shows the AI market is four to five times larger than published reports suggest.

"Most of the stuff published about AI is only on web, but the majority of the usage is actually on mobile apps," he explained. In the US, the dominant platform is iOS; outside the US, it's mostly Android.

The other piece of conventional wisdom that does not survive the data is that search is shrinking. It isn't:

"Whenever a new category opens up, another category might go down. But in this case, it's not going down."

The parallel he keeps coming back to is mobile apps: when they launched, people said the web would die. It didn't. People just used technology more.

Total combined usage across search engines and AI is up 26% worldwide since ChatGPT launched.

The pie is getting bigger, not shifting.

Answer Engine Optimization is almost entirely a long-tail game

The single most important insight Smith shared: the query distributions for search and AI are inverses of each other.

In search today, only about 4% of queries are ten words or longer. The long tail, in his words, "in some sense, doesn't even exist in search anymore." In AI, the opposite is true: 60% of prompts are ten words or longer. "It's the inverse distribution. Long-tail answer engine optimization is essentially almost all that answer engine optimization is."

Why?

Because users can now prompt for things too specific or complex to search for. That's also why the pie is growing: a new set of use cases has opened up that didn't exist before.

The problem is that most marketers are still tracking short, head-term prompts.

"Marketers are not changing," Smith said. "Most of what people are looking at is not representative. They are short search-related prompts."

The opportunity is exactly where it was in the early days of long-tail SEO: if you have the data and others don't, you can exploit it.

Find real prompts in primary sources, not from ChatGPT itself

If the long tail is where almost all AEO opportunity sits, the next question is how to find the real prompts. Smith's answer: don't ask AI to generate them.

"If you ask ChatGPT, guess what the tail is? The answers are very relevant and zero volume," he said. "It's the same with search. If you say, 'suggest some search keywords,' they're really relevant keywords with no volume." Good ideas, no real basis.

Instead, go where the conversation actually happens. Smith demonstrated this with Rippling: he prompted Gemini to pull real Reddit threads about the company, sorted by upvotes, and got a table of questions people were actively asking. "Rippling versus Deel. How bad is the support once implementation is over?" These are questions other people are controlling the narrative on, and they map directly to pages Rippling could build.

The same logic applies to internal data.

"Do you have a Slack channel? Do you have customer support? Do you have sales calls? Go through and mine. What are the questions people are asking about me? That's what's in the tail," Smith said. He described using Claude's Cowork feature with a Slack connector to run exactly this kind of analysis on his own client projects.
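The mining step Smith describes needs no special tooling to prototype. A minimal sketch in Python, assuming you've already exported messages as plain strings (the sample messages and brand name here are hypothetical, not from the session):

```python
import re
from collections import Counter

def mine_questions(messages, brand):
    """Pull question sentences that mention a brand from raw text
    exports (Slack messages, support tickets, sales-call transcripts)."""
    questions = []
    for msg in messages:
        # Split into rough sentences and keep the ones that end in "?"
        for sentence in re.split(r"(?<=[.?!])\s+", msg.strip()):
            if sentence.endswith("?") and brand.lower() in sentence.lower():
                questions.append(sentence)
    # Rank by how often the same question keeps recurring
    return Counter(questions).most_common()

# Hypothetical export of support messages
messages = [
    "Rippling vs Deel: which is better for a 50-person team?",
    "How bad is Rippling support once implementation is over?",
    "Rippling vs Deel: which is better for a 50-person team?",
    "Thanks, that fixed it.",
]
for question, count in mine_questions(messages, "Rippling"):
    print(count, question)
```

The recurring questions at the top of this list are the long-tail prompts Smith says map directly to pages you could build; in practice you would feed a much larger export and normalize near-duplicate phrasings before counting.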

AI summarizes consensus, so you need to be cited many times across many sites

One of the most counterintuitive things Smith showed was a query for "what's the best website builder for designers" in ChatGPT. Webflow came up first in the AI's answer. But when he looked at the actual sources the AI had pulled, Webflow wasn't even listed.

"AI is summarizing consensus," Smith said. "So it's not rank once or rank best. It's rank many times, as many times as possible."

Webflow wins in AI because it's mentioned everywhere else, not because it ranks first in any single result. That reshapes the whole strategy.

"The more people who are mentioning your brand or your product, the better. As much as possible." When asked whether this makes AEO significantly more work than SEO ever was, Smith didn't hesitate: "It's more complex than SEO because SEO is just my site. This is now off-site, and so many off-sites."

The citation mix is also more fragmented than most people realize.

Reddit and YouTube get most of the attention in AEO discussions, but Reddit is only 2.36% of all cited domains and YouTube is 1.65%. "You can't just have a Reddit and YouTube strategy and then be done with it. You need to have a wide net strategy," which Smith described as PR, affiliates, viral content, and content syndication stacked together.

You can measure AI: everything is a probability distribution

A common objection to AEO is that AI is too unpredictable to measure. Smith rejects this framing directly.

"It's kind of like if somebody said, 'I'm going to have a son. How tall is he going to be?' And you're like, 'Well, every human is totally unique. So you could never know.' That's not true. Humans generally are 5'5 to 6'5. There's a probability distribution despite the fact that every human is a unique snowflake."

Every AI answer is a unique snowflake, but the distribution is stable. Smith walked through a study where his team asked ChatGPT for the best flavor of ice cream thousands of times. Vanilla came up most often. Chocolate was second. Some answers, like Thai tea, showed up about 4% of the time. "You can measure AI with probability distributions," he said.

The same principle applies to brand citations. You measure how often you appear and in what position, across many prompt runs. Ranking in AI looks less like a single static position and more like a weather forecast: predictable in aggregate, variable in any single instance.
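This sampling approach is straightforward to sketch. The snippet below, a simplified illustration rather than Graphite's actual methodology, estimates citation share by running the same prompt many times and counting brand mentions; `fake_model` is a hypothetical stand-in for a real model API call, seeded to mirror the ice-cream study's rough distribution:

```python
import random
from collections import Counter

def citation_share(run_prompt, brands, n_runs=1000):
    """Estimate how often each brand appears in an AI answer by
    sampling the same prompt many times: stable in aggregate,
    variable in any single run."""
    mentions = Counter()
    for _ in range(n_runs):
        answer = run_prompt()  # one sampled answer (a string)
        for brand in brands:
            if brand.lower() in answer.lower():
                mentions[brand] += 1
    return {b: mentions[b] / n_runs for b in brands}

# Hypothetical stand-in for a model API: answers drawn from a
# fixed distribution, roughly mirroring the ice-cream study.
def fake_model(rng=random.Random(0)):
    return rng.choices(
        ["Vanilla is the classic choice.",
         "Chocolate, hands down.",
         "Try Thai tea if you want something different."],
        weights=[0.55, 0.41, 0.04],
    )[0]

shares = citation_share(fake_model, ["Vanilla", "Chocolate", "Thai tea"],
                        n_runs=2000)
```

Swapping `fake_model` for a real API call and tracking your own brand's share and position across runs gives you the "weather forecast" view of AI ranking the article describes.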


Allocate roughly 80% of budget to SEO, 20% to AEO

When asked how to split budget between SEO and AEO, Smith's answer was direct: roughly 80/20, with SEO still getting the larger share.

The reason isn't that AEO doesn't work. It's that the pros and cons are different.

AEO's advantages: brand-new companies can do it even without the domain authority to rank in search, and results can come fast. The downside is volatility. "Most stuff doesn't work," Smith said. "I've almost never spoken to someone who said that they hired a Reddit agency and it went well. But if it worked, it would work very quickly. So there's a lot of throwing spaghetti at the wall."

SEO, especially for established companies, is more predictable. "Everything you do in SEO will work for free," he noted. AEO is additive: build the SEO foundation, then layer on product content and off-site activity aimed at the new long-tail prompts. On the money question of whether optimizing for one hurts the other, Smith was unambiguous: "There's no conflict at all. It's just an efficient allocation of scarce resources."

AI-generated content usually fails, with one important exception

Smith's team has published research showing that while AI-generated content now outnumbers human-written content in published volume, it only accounts for about 14% of what actually gets cited in Google and ChatGPT.

"It works sometimes, but it doesn't work usually," he said. The underlying reason is what researchers call model collapse: when AI models train on their own outputs, the diversity of human knowledge starts disappearing. "The teams of the model companies want human content. If you have AI content and it's fed into itself, all kinds of bad things happen."

There's one important exception. Smith walked through Sermo, a Graphite client that ranks for Ozempic ratings alongside WebMD and Drugs.com.

Sermo's page has AI-generated summaries, but they're summaries of their own proprietary doctor reviews, not generic AI prose. "It's a derivative of their own internal data, their own UGC from doctors. It's unique, and it doesn't exist on the other results." The algorithm, Smith argued, rewards uniqueness. AI content that's genuinely derivative of original data can work; AI content that's derivative of other AI content cannot.

He also predicted a cultural backlash in about eighteen months. "Similar to the social media backlash, where people say, 'There's too much of this fake stuff that I don't even know if it's real. I want human content.'"

Pick one AI tool and go deep on inputs

The final question of the session was one Smith says he's asked often: how does a CEO of an agency sitting in San Francisco keep up with every new model, every feature, every benchmark?

His answer: don't try. "Most AI advice is that you have to keep up with everything. Look at every model, try every feature, compare all the LLMs. That's not actually true," he said. "Pick one of them. If you had to pick one, I would probably pick Gemini, Claude, or ChatGPT. But focus on the inputs and how you're asking the questions."

He described his own practice: connecting Claude Cowork to Graphite's Slack, their internal MCP server, and Ahrefs, then asking structured prompts that stitch together multiple sources to produce answers he actually uses. The trap is asking AI generic questions. "If I just said to ChatGPT, 'answer that question,' it would give me cool answers that are not actually that useful."

The practice that works is closer to real AI fluency: organize the inputs, configure the prompt, and go deep on one tool rather than shallow on ten.

Key takeaways

  • Track long-tail prompts, not short head terms. 60% of AI prompts are ten words or longer, and that's where almost all AEO opportunity lives.
  • Find real prompts in primary sources like Reddit, Slack, customer support, and sales calls. Don't ask ChatGPT what the tail looks like, because it will invent relevant ideas with zero real volume.
  • Build for consensus, not a single ranking. The more places your brand is mentioned, the more likely AI engines are to cite you.
  • Go wide, not narrow. Reddit and YouTube combined are less than 5% of all citations. A real AEO strategy layers PR, affiliates, content syndication, product pages, and integration pages.
  • Allocate roughly 80% of budget to SEO and 20% to AEO, knowing AEO is higher-variance and can produce quick wins or nothing at all.
  • Pick one AI tool and go deep. Configure your inputs, structure your prompts, and stop chasing every new model release.

Ethan Smith is the Founder and CEO of Graphite, a research-driven growth agency working with Netflix, OpenAI, Masterclass, Adobe, Meta, and Webflow. He is an adjunct professor at IE Business School and teaches SEO and AEO at Reforge. This article is based on his March 27, 2026 live session hosted by Lead with AI PRO.

Want to reach 30,000+ business leaders applying AI in their work, teams, and organizations?
Advertise with us.
