The impact of AI Mode on SEO - analysis of 10 studies
We looked for commonalities and differences across 10 AI Mode studies - here's what we found
This Memo was sent to 22,174 subscribers. Welcome to +136 new readers! You’re reading the free version of Growth Memo. Premium subscribers get deeper strategy breakdowns, original research, and exclusive content drops - including today’s insights on “ranking” in AI Mode and guiding your stakeholders to understand this shift in search.
I just got back from San Diego and Toronto, where I spoke at Ahrefs Evolve and SEO IRL - both of which were fantastic. A lot of people I met subscribed to the Growth Memo. Thank you all for coming out!
I also had the pleasure of facilitating a 3-hour mastermind with leaders from Redfin, Angi, Clickup, Glean and Ourplace in San Diego. If I do more of these, I’ll let you know.
Get Profound For $99/Month
Profound is now available at our lowest price ever, $99/month. Now every brand, from bootstrapped startups to global enterprises, can use Profound to monitor their AI visibility and build content workflows.
Track your visibility in ChatGPT
See how AI crawlers interact with your website
Improve how often your domain is cited in AI answers
Two weeks ago, we published the largest user behavior study of AI Mode and found groundbreaking results.
This week, I’m connecting the dots between 10 different studies, tests, and data sources to see what the research actually says about AI Mode - and to answer 5 questions everyone’s asking:
How does AI Mode impact click-through rates and SEO traffic?
Are people even using AI Mode?
Are AI Mode responses accurate?
How are AI Overviews and AI Mode similar? How are they different?
Can brands still benefit from earning AI Mode visibility, even if clicks are scarce (if not zero)?
And for premium subscribers, I’ve analyzed those same 10 sources to dig into bonus material:
What can you do to “rank” in Google’s AI Mode?
How can you guide your team or stakeholders in understanding the challenges of AI Mode optimization?
In this meta-analysis of the core data our industry produced about AI Mode in 2025, we’ll look at the aggregated research - all in one place. So I’d bookmark this, as it’s likely a stakeholder is going to ask you for it soon, if they haven’t already.
(If I’m missing any big studies or tests here, send me a DM.)
1/ How does AI Mode impact click‑through rates and SEO traffic?
Here’s what we do know for sure: AI Mode drastically reduces external clicks.
This is a corroborated finding across the research touchpoints I used for this meta-analysis (including studies, tests, and fresh data).
We live in this reality right now - and can’t afford to ignore it.
Organic traffic stagnation (or even traffic decline) despite ongoing organic growth efforts is the reality today… and will accelerate if/when AI Mode becomes the default Google search experience.
Semrush’s AI Mode early-adoption analysis of ~69 million U.S. Google sessions found that 92-94% of AI Mode sessions resulted in no external click, and only 6-8% produced any outbound traffic [1].
The AI Mode user‑behavior study I published over the last 2 weeks, directed by Eric van Buskirk from Clickstream Solutions, corroborated this finding: 77.6% of our directed search task sessions had zero external visits, and median external clicks per task were zero.
Eric and I also worked together on Propellic’s travel industry study, and it echoes the same sentiments even though it’s industry-specific. Our data showed that for some search tasks, users’ interactions never left AI Mode. Users found enough information from AI-generated answers and moved on, unless they needed to take a final booking step [2].
Traffic does flow in certain cases.
I reported last week in the Growth Memo AI Mode usability study part 2 that shopping prompts produced clicks nearly 100% of the time, while non‑transactional tasks produced almost none.
Likewise, Propellic’s travel study found that planning tasks kept users in AI Mode (≈104 seconds of engagement), but once a decision was made they clicked out to book (spending ≈38 seconds before the external click).
Keep in mind, AI Mode doesn’t just dramatically reduce clicks, it also shrinks searching sessions: Semrush saw AI Mode sessions average 2-3 queries versus ~5 in traditional search [3]. That means not only is there less of a chance for traffic, but there’s also likely less of a chance for visibility, too.
Implication:
1/ Expect massively lower click-through rates from AI Mode compared to classic blue‑link SERPs.
2/ Instead of traffic, think in terms of brand visibility and user influence within AI Mode.
3/ For performance metrics, shift your attention from CTR to brand mentions, dwell time, and conversion during the final, most high-intent step.
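To make metrics like "zero-click share" concrete, here is a minimal sketch of how you might compute the figures these studies report from your own session data. The session log shape is a hypothetical assumption, not any study's actual dataset:

```python
from statistics import median

# Hypothetical session log: number of external clicks recorded per AI Mode session.
# In practice this would come from your own clickstream or panel data.
sessions = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0]

zero_click = sum(1 for clicks in sessions if clicks == 0)
zero_click_share = zero_click / len(sessions)  # share of sessions with no external visit
median_clicks = median(sessions)               # median external clicks per session

print(f"Zero-click share: {zero_click_share:.1%}")
print(f"Median external clicks per session: {median_clicks}")
```

With a distribution like the one above, the zero-click share lands near the 77-94% range the studies describe, and the median external click count is zero, which is exactly why means and click totals can mislead here.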
2/ Are searchers using AI Mode?
Research across our industry this year shows, at least for now, users are slow to adopt AI Mode.
However, Google seems hell-bent on training all searchers to head in that direction by including AI Mode buttons in Chrome and in AIOs on the SERP - even if that could mean less time for users in the SERPs or fewer ad clicks over time. In fact, in Growth Intelligence Brief #8, I reported:
Logan Kilpatrick, who leads the Google AI Studio and Gemini API product, shook the SEO world [on September 6] when he said AI Mode was going to become the default search experience.
Even though he qualified his statement shortly after, Sundar Pichai had already said the same thing on the Lex Fridman podcast back in June.
So get ready. All roads seem to lead to AI Mode, whether users like it or not.
iPullRank’s AI Mode UX study found that only 2-5% of participants used AI Mode across 5 tasks, while 30-47% engaged with AI Overviews.
And the month before they released their study, iPullRank received Similarweb data showing over 50% of users tried AI Mode once and then bounced [4].
When participants did use AI Mode in the iPullRank AI Mode study, they often consumed the answer, clicked nothing, and moved on.
In addition, back in August, Aleyda Solis shared that UK adoption of AI Mode slowed as user curiosity subsided after the initial launch.
So, it makes sense why Google is slow to roll AI Mode out broadly - it doesn’t have product-market fit yet.
Implication:
1/ There’s a high chance AI Mode evolves before a broader roll-out. Based on our and others’ research, more inline links and maybe a direct shopping integration could be on the roadmap.
2/ As AI Mode evolves, we’ll need more research to understand the impact on marketing. I don’t, however, think we’ll see a future where AI Mode suddenly sends out meaningful volumes of traffic.
3/ We likely have a bit of time before Google pushes AI Mode harder into the search results. We should use that time to set up telemetry to measure and optimize our presence in AI Mode.
3/ Are AI Mode responses accurate?
AI Mode accuracy varies by query type. Could this be why users are slow to adopt?
Despite the continued risk of inaccurate info and hallucinations, several studies this year point to high user trust in certain categories.
Propellic’s participants rated AI Mode answers highly - average post‑task trust scores were 4.3 out of 5. Users in this study praised AI’s ability to lay out activities and hotels clearly and quickly.
The Nielsen Norman Group’s (NNG) usability study - UX research that examined how people interacted with AI chats for search as a whole, including AI Mode - noted that after being introduced to AI chat, participants found it helpful for complex information-seeking tasks, and that generative AI saved time on those tasks by synthesizing data. [5]
However, participants in the NNG study still cross‑checked facts via classic search, indicating residual skepticism.
Other research highlights inaccuracies and gaps with AI Mode:
iPullRank participants searching for local news or health clinics found AI Mode results to be inaccurate or lacking specificity, so they relied on traditional sites or maps:
“In [the local sports and news headlines] search, many found the AIOs and AI Mode (as well as ChatGPT) to be inaccurate, less trustworthy, and not up to date, but the participants didn’t expect these sources to be timely or accurate in the first place, which is an issue in itself.”
In a small-scale experiment by Ahrefs, Patrick Stox created AI Mode-generated articles on technical SEO topics that contained factual errors (e.g., incorrect hreflang advice) and published them live to see if they could rank. The 3 test pages failed to appear for their target keywords, and the test suggests that AI Mode content may be insufficiently accurate for Google’s own EEAT guidelines.
Implication:
1/ Users generally trust AI Mode and AI answers on other platforms. Responses can be highly trusted for some high-intent shopping searches and informational queries, but they may contain inaccuracies or unwanted localization.
2/ Users and marketers should treat AI answers as starting points, double‑checking critical information and considering brand authority and verification.
3/ There’s a brand risk inherent to LLMs like AI Mode. Bad actors can use the still nascent and simple functionality of LLMs to spread lies about brands on the web and create bad brand sentiment. This is something you want to monitor with AI visibility trackers.
4/ How are AI Overviews and AI Mode similar? How are they different?
Both AIOs and AI Mode produce synthesized answers drawn from multiple sources, and they both aim to keep users on Google. But users do interact with them differently.
While we found in our research that AI Overviews act more like fact sheets, where users skim to find quick information, AI Mode gets deeper engagement. Users spend on average twice as much time with AI Mode as with AI Overviews.
Interestingly, in iPullRank’s AI Mode study, users were confused about AI Mode vs. AI Overviews and mostly ignored the “dive deeper into AI Mode” button.
Setting aside the general user confusion, there are 2 core similarities seen across the 2025 AI Mode research and my analysis of 19 studies about the impact of AIOs:
Brand influence: Visibility in both AIOs and AI Mode depends on strong authority signals, like brand recognition, a quality link profile, and quality content.
Limited traffic: Both experiences reduce clicks. Studies on AIOs showed CTR declines, while AI Mode sessions are overwhelmingly zero‑click.
The differences?
Citation patterns: SE Ranking found more sources in AI Mode (averaging 12.6 links per answer) with a mix of block and inline links, while AIOs often cite fewer sources. AI Mode and AIOs have low overlap: only 10.7% of URLs and 16% of domains are shared between them.
Content length and style: Semrush’s comparison study shows AI Mode produces longer answers (~300 words), similar to ChatGPT, and uses more unique domains (~7 per answer) than AI Overviews (~3).
User interaction: AI Mode is accessed via a separate mode (or panel - at least, for now) and offers chat‑style follow‑up, product previews, local packs and business profile cards. AIOs appear inline within classic search and People Also Ask questions and rarely include interactive features (at the date of this writing, at least - we’re seeing more interactive features pop up that take users into AI Mode).
Trigger frequency: AIOs aren’t triggered all the time, although Google has increased their rollout across queries over the last year. AI Mode can be invoked by the user or autopopulated for longer, conversational prompts.
Implication:
1/ AI Mode is not simply an extension of AI Overviews - it is a more exploratory, chat‑driven experience with a wider range of sources and interactive elements. Users really spend time with the answer in AI Mode.
2/ For optimization, treat the two as separate channels with overlapping but distinct signals. You need to track and monitor both!
3/ For now, AI Overviews provide a jumping-off point to AI Mode. As a result, there’s a chance that queries showing AI Overviews lose even more clicks over time because users venture into AI Mode. The more traffic loss you see per query, the more you should look at AI Mode.
5/ Can brands still benefit from AI Mode visibility, even if clicks are scarce?
Yes - visibility inside AI Mode influences user decisions even without clicks. Here’s how I can answer this confidently: Several studies show that users read AI answers, examine citations and form opinions without leaving Google.
I get this question all the time from my clients: “If Google shows AI Mode and our clicks go away - how do we know whether what we’re doing works?”
Our AI Mode usability study found that participants spent 52–77 seconds reading AI answers per task and often concluded their research within the pane. Propellic’s travel research shows users spending ≈104 seconds planning inside AI Mode and then booking on an external site.
High trust scores (4.3/5) imply that brand mentions inside AI Mode transfer authority to those brands.
Participants looked at inline links, citations and product previews but rarely clicked out, unless they had a shopping task to complete.
We also found that brand familiarity meaningfully drives decisions.
In fact, recognized brands were chosen even when other options were available. Thus, being cited (even without a click) reinforces brand recall and can lead to direct visits later.
In short: treat AI Mode as a branding channel. The goal is to be present where users read, not just where they click.
Implication:
1/ Attribution and tracking of decisions made in AI Mode is currently impossible, but we know from the research that it matters. If/when AI Mode becomes the default search experience, it will significantly change the way we think about search.
2/ The best we can do is track AI Mode visibility (how often, when and with what sentiment is our brand mentioned?) and self-reported attribution.
3/ Ads in AI Mode will provide an extra layer of visibility that hopefully lets us quantify and prioritize optimization work.
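As a rough illustration of what that visibility tracking could look like, here is a minimal sketch that tallies how often a brand is mentioned across a set of captured AI Mode answers and tags each mention with a naive keyword-based sentiment. Everything here is a hypothetical assumption - the brand name, the answer texts, and the crude sentiment rule are stand-ins, not any AI visibility tool's actual API:

```python
# Hypothetical inputs: `answers` would come from manually captured or
# tool-exported AI Mode responses; the keyword lists are a crude stand-in
# for a real sentiment model.
BRAND = "acme"
POSITIVE = {"best", "reliable", "recommended", "trusted"}
NEGATIVE = {"avoid", "expensive", "unreliable", "complaints"}

answers = [
    "Acme is a reliable option recommended by many reviewers.",
    "Some users report complaints about Acme's pricing.",
    "For this task, Globex is the most popular choice.",
]

def sentiment(text: str) -> str:
    """Label a mention positive/negative/neutral by keyword counts."""
    words = set(text.lower().replace(".", "").replace("'s", "").split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

mentions = [a for a in answers if BRAND in a.lower()]
mention_rate = len(mentions) / len(answers)  # how often the brand is cited
sentiments = [sentiment(m) for m in mentions]

print(f"Mention rate: {mention_rate:.0%}")
print(f"Mention sentiments: {sentiments}")
```

Run over a stable prompt set on a schedule, even a toy tracker like this gives you a trendline for "how often, when, and with what sentiment" - which is the best proxy we currently have while click-based attribution is off the table.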
Final thoughts: A critique of the existing AI Mode research (including my own)
Before I dig into the holes that existing 2025 AI Mode research has yet to fill - and the work that still needs to be done to uncover repeatable, proven ways to earn consistent mentions in these features - let me be clear:
I’m not taking swings at any of these studies and tests or the teams that developed them. We’re all benefiting from this expensive research these teams are working hard to distribute.
Across our sector, I’m seeing sharp experts and colleagues work diligently to widely and freely share information, and it makes me prouder than ever to be in growth marketing.
It truly feels like so many of us are doing this work together.
But the truth is, LLMs are a black box right now. And there’s so much more we need to know.
While the available studies offer valuable insights, they also come with limitations.
Below is my quick assessment. My intention in including this here is to inspire us all to keep problem-solving to crack open these vaults of information.
SE Ranking – AI Mode Research
This study uses a large dataset of 10K U.S. queries and repeats queries across three datasets to measure volatility. It analyzes link types and overlap with organic results, providing clear metrics.
But the study lacks qualitative user data and does not evaluate how often AI Mode appears.
iPullRank – AI Mode UX Study
This research includes real‑user think‑aloud sessions with 100 participants across multiple tasks. It also provides qualitative insights into user confusion between AI Mode and AI Overviews, which is meaningful.
Usage of AI Mode in this study was extremely low (2–5%), making some findings thin. So while we received some good data here about how users are searching within Google right now with these new features available, we don’t get solid information about how people use AI Mode specifically.
Semrush’s 2 studies: AI Mode vs. Traditional Search and Other LLMs + AI Mode Early Adoption
The data for these 2 studies is very robust. But for the AI Mode comparison study specifically, I’d like to see research on an expanded view of search intents, other than the classic 4.
In Trust Still Lives in Blue Links - further analysis of the UX study of AIOs I published in May - I demonstrated a clear pattern of new ways users interact with LLM-based search features to validate AI outputs.
We all must expand our understanding of search intent, and having the data/research that more specifically parses out intent would help.
I would be very excited to see a combination of clickstream data with direct observations, broken down by vertical and over time for more AI Mode insights.
Growth Memo – User Behavior & SEO Impact (Parts 1 & 2)
I wouldn’t change anything about the research we’ve put out on AI Mode the last few weeks.
Just kidding.
Our results were specifically limited to the use of AI Mode, so I caution against applying the insights from the study beyond the tasks or features tested. Participants knew they were in a study, which obviously can influence their behavior. We also selected a broad range of tasks, which covered many intents and use cases but didn’t explore all of them in depth. I hope future research can focus exclusively on aspects like local search or shopping.
I’d also like to replicate the study type across industry types, larger sets of search tasks by search intent, and across LLMs.
Ahrefs – AI Mode Content Experiment
Of course, this is an extremely small sample; results may not generalize. But, this is an interesting test of Google itself regardless.
I’d be curious to test more topics, including human‑written baselines.
Nielsen Norman Group – AI and Search Behavior
This study looked at LLM interactions overall and wasn’t specific to AI Mode or Google; it also used a smaller sample of 10 participants.
Overall, it would be interesting to test each AI-chat-based search method, including AI Mode, specifically with a larger sample size and measure differences in trust and efficiency.
Propellic – AI Impact on Travel Booking
This was limited to the travel vertical, although there are insights here that can be used regardless of industry.
Participants were prompted to use AI Mode, which may not reflect organic behavior - especially if people are naturally avoidant of the feature.
Amsive/Profound – Leading Brands & Domains in AI Search
The study measures citations in LLMs, not click behavior or user satisfaction with those outputs or citations.
Overall, I’d love to see more information about AI Mode that includes broader geographic and multilingual datasets as it rolls out more globally, along with investigation into content accuracy and user satisfaction.
Our industry really needs increased sample size + diversity across these usability studies, but to be honest, it’s a huge, expensive undertaking.
For Growth Premium: How to rank in AI mode and get buy-in from leadership
Below, I dig in and expand on research from across these studies that can help us answer the following questions:
What can you do to “rank” in Google’s AI Mode? While I don’t think anyone has all the answers for guaranteed LLM visibility (yet), the research presents clear patterns we can learn from.
How can you guide your team or stakeholders in understanding the challenges of AI Mode optimization? This question has come up repeatedly over the last several weeks, and I’ll point you to some crucial parts of the AI Mode research to inform these conversations.