6 AI SEO Experts Leading the Shift to AI Search

media

6 AI SEO Experts Leading the Shift to AI Search

Here’s a number worth holding onto.

54.5%.

That’s how often the big AI platforms disagreed on which brand to recommend in Aleyda Solís’s 2024 tests. Not different rankings. Different winners. Same query, same product category, same moment in time. ChatGPT picked one company. Claude picked another. Gemini picked a third.

This is the reality of search in 2026.

You’re not optimizing for one algorithm anymore. You’re optimizing for multiple models that weigh signals differently, pull from different sources, and reach different conclusions. The old playbook assumed consistency. The new one assumes fragmentation.

The people below don’t treat this like a problem to solve. They treat it like a system to map. 

If you’re searching for an AI SEO expert who understands fragmentation rather than fights it, these six fit that description. Each found a different entry point. Each built a different practice around what they discovered.

Here’s what they found and how they found it.

1. Oleg Galeev: The Citation Mapper

Late 2024. Google rolls AI Overviews into more countries. Most people speculate about what this means. Oleg starts cataloging where ChatGPT actually pulls answers from.

He runs r/AISEOforBeginners on Reddit. The community tests tactics in real time and posts results publicly. Wins get shared. Failures get shared. No curated highlight reels.

One pattern emerged from those tests. AI platforms didn’t default to the biggest brand names. They pulled heavily from listicles. Roundup posts. Articles that compared multiple options and named winners. Being in those publications mattered more than ranking for the keyword yourself.

Oleg started working backwards. He found listicles already ranking for commercial terms. Pitched clients as experts worth including. Within weeks, those clients appeared in ChatGPT answers. Their own sites hadn’t changed. They just showed up where AI already looked.

He applied the same logic to video. AI pulls from YouTube transcripts constantly. Oleg started building simple channels with ElevenLabs voiceovers. Each video became another place AI could find his clients.

Before the AI wave, Oleg sold two affiliate sites for over a million combined. Both had around 150 posts. He improved existing content instead of chasing volume. That efficiency mindset carries into his AI work and makes him an SEO expert AI practitioners trust for tactical guidance.

  • His Reddit community tests AI visibility tactics in real time with public results
  • His citation mapping shows which publications AI actually cites versus which sound authoritative
  • His video experiments treat YouTube transcripts as citation assets, not just traffic drivers
  • His listicle placement strategy gets clients featured where LLMs already look

2. Aleyda Solís: The Cross-Platform Tester

Aleyda runs Orainti. She also writes SEOFOMO, a newsletter that recently crossed 40,000 subscribers.

In 2024, clients kept asking how they performed in AI search. She couldn’t answer properly because nobody had mapped what “AI search performance” meant across different platforms. So she built the map herself.

She tested 773 identical queries across GPT-5, Claude, and Gemini. Commercial queries. “Best CRM for small business.” “Top email marketing platforms.” Queries where recommendations drive revenue.

The 54.5% disagreement rate emerged from that data. Not variation in ranking positions. Variation in which brands got named at all. Same query. Same moment. Different winners.

This changes how brands allocate resources. Optimize for citations everywhere and you might still disappear from one platform entirely. Aleyda’s testing helps identify where the gaps are.

She also identified something she calls the feedback loop problem. Traffic loss isn’t the real risk. Feedback loss is. Fewer clicks mean fewer behavioral signals. This slows down experimentation. Brands need to create their own insight loops to compensate. Her cross-platform methodology makes her a leading GEO SEO expert for AI results that international brands rely on.

At a Similarweb gathering in London, she brought together SEOs from agencies, in-house teams, and publishers. The conversations revealed something uncomfortable. Many teams make strategic calls based on assumptions rather than actual data.

  • Her 773-query test across three platforms revealed the 54.5% disagreement rate
  • Her cross-platform methodology helps brands understand where they’re visible and where they’re not
  • Her feedback loop research explains why traffic drops compound over time
  • Her newsletter tracks what changes across global search markets

3. Dan Petrovic: The Extraction Quantifier

Dan runs DEJAN in Australia. He approaches search the way engineers approach machinery. Take it apart. Measure the components. Document how everything fits together.

In 2024, he wanted to answer one question. When Google’s AI reads a page, how much of it actually gets used?

Most people guessed. Dan built a methodology. He pulled 7,060 real search queries. Tokenized 2,275 pages. Tracked exactly which sentences and paragraphs got extracted into Google’s AI systems. The final dataset held 883,262 individual snippets.

The numbers told a story.

Google allocates about 2,000 words of “grounding budget” per query. That budget splits across 3 to 5 sources. Position one gets roughly 531 words extracted. Position five gets 266. Drop below the top five, and you’re not in the response at all.

Individual pages cap out around 540 extracted words regardless of total length. Write 1,000 words, and 61% of your content gets used. Write 3,000 words, and only 13% gets used. The extra 2,000 words basically disappear.

This flipped how specialists think about content. Long pages aren’t an advantage. They’re a dilution. A focused 800-word page outperforms a sprawling 4,000-word page because AI actually reads more of it. His findings are cited by nearly every top AI SEO expert in the world working on content optimization.

Dan faced pushback. Some questioned his dataset selection. Others asked about statistical significance. He acknowledged the limitations. Can’t publish client data. Can’t control for every variable. But the findings matched what practitioners were seeing in the field.

  • He built a methodology to track exactly which parts of a page AI extracts
  • His data showed Google caps extraction at roughly 540 words per page regardless of length
  • His position analysis revealed the #1 result gets 531 words while #5 gets 266
  • His work proved that shorter, denser pages outperform longer ones for AI visibility
  • He published imperfect findings rather than waiting for perfect data

4. Cindy Krum: The Entity Architect

Being the founder of MobileMoxie, Cindy Krum has been writing about entity-based SEO for years.

Her current focus is on how AI models understand relationships between entities. Not just what things are, but how they connect. Google’s Knowledge Graph was the prototype. AI Overviews are the full expression.

She tracks how AI handles real-world events. When something happens, how quickly do models incorporate that information? Her research suggests latency varies by platform and topic. Understanding these delays helps brands time their visibility efforts. Her long track record of accurate predictions establishes her as a definitive AI SEO expert 2026 resources point to.

Most SEOs think in keywords. Cindy thinks in concepts. How does AI connect “Apple” the fruit to “Apple” the company to “Apple” the stock? Her frameworks help brands position themselves as the answer, not just a result.

She speaks at conferences globally and publishes consistently on MobileMoxie’s blog. Her podcast features deep conversations with other search thinkers.

  • Her entity-based optimization framework predates the current AI search boom
  • Her research on information latency helps brands understand when new content gets incorporated
  • Her podcast interviews document how other practitioners approach entity optimization
  • Her mobile-first prediction track record demonstrates her ability to spot shifts early.

5. Szymon Słowik: The Ambiguity Researcher

Szymon works from Poland. His agency takaoto.pro focuses on something most SEOs ignore. How language models actually understand words.

Think about the query “best apple.” Google sees three letters. A person means fruit, or computers, or the Beatles’ record label. Context determines meaning. Szymon studies how AI builds that context.

He runs experiments on ambiguity. If you search “best apple” in Warsaw, do you get fruit or electronics? Depends on your search history. But also depends on how clearly the content signals what it’s about. Szymon found that pages establishing entity type early get favored when AI has to choose.

He also tests how format affects extraction. Take a paragraph explaining pricing. Take the same information in a table. AI treats them differently. Tables get pulled whole. Paragraphs get summarized. Lists get itemized. The same data performs differently depending on how you structure it.

His podcast features conversations with other specialists working on AI visibility. Those discussions cover what’s changing and what’s staying the same.

  • His ambiguity research tracks how AI decides between multiple meanings for the same query
  • His format testing shows tables, lists, and paragraphs get extracted differently by language models
  • His entity clarity framework helps brands signal exactly what category they belong to
  • His podcast documents how practitioners across Europe approach AI search
  • His work bridges NLP theory with practical content optimization

6. Jason Brooks: The Context Analyst

Jason runs Wordologists now. Before that, he built UK Linkology into an agency that did things differently from the start.

In 2018, he started tracking something most link builders ignored. Why certain backlinks moved rankings while others with similar metrics did nothing. He pulled traffic data. Mapped semantic relationships. Built spreadsheets comparing link performance across hundreds of campaigns.

That work became the M-Flux formula. It’s not another domain authority clone. It measures actual traffic patterns between linking pages and how topics relate. Jason found that relevance isn’t binary. A page about “SEO tools” passes a different weight to a page about “link building” than a page about “SEO conferences.” Same domain. Different semantic distance. Different ranking impact.

When AI search emerged, he had years of context data ready. He started running experiments on how language models interpret link relationships. Does ChatGPT treat a link from a roundup post differently than a link from a standalone review? Yes. Does it matter if the linking page mentions your brand in the first paragraph versus the tenth? Also yes.

His YouTube channel documents these experiments. Not theory. Screen recordings of actual tests with actual results. Some work. Some fail. He shows both. For practitioners seeking a top AI SEO expert in the US who understands link context, Jason’s research provides the framework.

His current focus is on using generative models to find citation opportunities that traditional tools miss. Instead of keyword matching, he analyzes topical clusters. Instead of domain rating, he looks at semantic density.

  • His M-Flux research maps how traffic patterns between pages predict ranking movement better than authority scores alone
  • His AI prospecting experiments use generative models to surface citation opportunities in topical clusters rather than keyword matches
  • His YouTube channel documents failed tests alongside wins, showing the actual process instead of curated results
  • His semantic density work measures how closely linked pages need to relate for maximum ranking impact

Links still matter in 2026. But the context around them matters more. Jason built the systems to measure that context.

What Pulls These Six Together

December 2025. Google’s AI Overviews cover 100 countries. ChatGPT hits 700 million users. Perplexity processes 500 million queries monthly.

Three platforms. Three different ways of deciding what to show.

Here’s what the numbers actually mean for a business owner. If you optimize only for Google, you disappear from ChatGPT. If you optimize only for ChatGPT, Claude ignores you. If you optimize for all three, you’re guessing because nobody publishes their ranking factors.

The people in this list stopped guessing.

  • Oleg spent 18 months cataloging which publications AI actually cites. He has spreadsheets. He knows that certain listicles get pulled by every major model. He also knows which ones only Gemini uses.
  • Aleyda ran 773 queries through three platforms and watched them disagree 54.5% of the time. She didn’t stop at the headline number. She segmented by industry, by query type, by platform. She knows exactly where the disagreements happen and why.
  • Dan quantified how AI extracts content. He knows that pages under 1,000 words get 61% of their content used. Pages over 3,000 words get 13%. That’s not opinion. That’s 883,262 extracted snippets talking.
  • Cindy mapped how entities connect. She knows why some brands get treated as authorities while others get treated as mentions. It’s not domain authority. It’s entity clarity.
  • Szymon studied how AI handles ambiguity. He knows why “best apple” sometimes returns fruit and sometimes returns computers. Structured data matters. So does topical consistency.
  • Jason measured link context. He knows that a link from a “best SEO tools” page carries different weight than a link from a “SEO conferences” page. Same domain. Different context. Different AI treatment.

Final Thoughts

They all built their own measurement systems because nobody else would. They tracked what actually happened instead of predicting what might happen. They published findings even when those findings contradicted industry consensus.

The 54.5% disagreement rate isn’t interesting because platforms disagree. It’s interesting because Aleyda had to run 773 tests to discover something Google, OpenAI, and Anthropic won’t tell you. The same applies to Dan’s extraction data. Cindy’s entity research. Oleg’s citation maps.

These six built the instruments that measure what AI search actually does. Each approached the problem from a different angle. Each built their own methodology because existing tools couldn’t answer their questions. 

That’s why they’re worth following in 2026. Not because they have answers. Because they built the tools to find them. If you’re looking for the best AI SEO expert to learn from in 2026, start with the one whose questions match your own.