Growth Intelligence Brief #6
The impact of ChatGPT-5, Perplexity ranking factors, and how Google's Web Guide works
Welcome to another Growth Intelligence Brief, where organic growth leaders discover what matters: insights into the bigger picture and guidance on how to stay ahead of the competition.
As a free subscriber, you’re getting the first big story. Premium subscribers get the whole brief.
Today’s Growth Intelligence Brief went out to 400 marketing leaders (+19 since the last issue).
This week, we’re tracking 3 seismic shifts in search and interface control:
The impact of ChatGPT-5
Perplexity ranking factors
How Google’s Web Guide works
In the SEO landscape check, I’ll also compare H1 2025 with H1 2024, along with the March and June Core Updates. Curious which verticals “won” and “lost”?
You might be surprised…
The impact of ChatGPT-5
ChatGPT-5 saw the light of day on August 7, 2025, replacing GPT-4o and other older models for free and paying users alike.
Instead of letting users select from a range of models, ChatGPT now offers only GPT-5 and GPT-5 Thinking. ChatGPT-5 also uses smart routing: a router decides per request whether a quick answer is enough or deeper reasoning is needed.
Here’s what happened:
The launch received very mixed reactions:
ChatGPT-5 outperforms ChatGPT-4 by 15-30% in coding and reasoning benchmarks, and hallucinations dropped by roughly 80%! [source, source]
Base GPT‑5 jumped from 6.3% to 24.8%, while Pro reasoning mode achieved up to 89.4% on PhD-level science queries, well above GPT‑4o’s 70.1%. [source]
However, Claude Opus 4.1 outperforms GPT‑5 on SWE‑Bench Verified (74.5% vs ~60%), a leading benchmark for coding capabilities. [source]
The roll-out of ChatGPT-5 was glitchy, and many users missed the 4o and o3 models that disappeared at launch.
Why this news matters:
ChatGPT-5 is not the AGI (Artificial General Intelligence) step-change that many thought it to be.
AGI essentially means AI that can think and learn the way humans do.
We’re not there. Maybe not even close.
That’s good news for us in a couple of ways:
Lower risk of being replaced by AI: While benchmarks are strong, real-world errors still occur - necessitating human editorial oversight.
Stronger models for our work: With deeper reasoning, fewer hallucinations, and larger context windows, GPT-5 can produce more reliable, coherent blog posts, whitepapers, and content narratives.
Dan Petrovic had a good point on this: [source]
OpenAI’s leadership has increasingly signaled a strategic shift toward “intelligence and reasoning” in model weights, while relying on external sources or retrieval for up-to-date knowledge. In other words, OpenAI appears to be designing models that think and reason well, but don’t attempt to internally store all world knowledge – instead leveraging retrieval-augmented methods (tools, search, plugins, large contexts) to pull in fresh information as needed. This approach is motivated by efficiency, cost, and performance considerations, as evidenced by recent statements, research, and product releases.
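If you want to see the mechanics behind that, here is the retrieval-augmented pattern in miniature: the model does the thinking, while a search step pulls in fresh information at request time. This is only a sketch under assumptions: it uses the OpenAI Python SDK’s chat completions API with a “gpt-5” model identifier, and search_web() is a hypothetical stand-in for whatever retrieval tool (search, plugin, index) you plug in.

```python
# Minimal retrieval-augmented sketch: the model handles the reasoning, while a
# retrieval step supplies fresh facts it does not store internally.
# Assumptions: the OpenAI Python SDK's chat.completions API and a "gpt-5" model
# identifier; search_web() is a hypothetical placeholder for your retrieval tool.
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> str:
    """Hypothetical retrieval step: return fresh snippets for the query."""
    return "…snippets from your search index, web search, or plugin…"

def answer_with_retrieval(question: str) -> str:
    context = search_web(question)  # fresh knowledge pulled in at request time
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[
            {"role": "system", "content": "Answer using the provided context and reason carefully."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

That split is exactly what Dan describes: the knowledge lives outside the weights, so the model can focus on reasoning while the facts stay fresh.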
My take on this:
The fact that it took so many resources and so much time to develop and train ChatGPT-5, for the results we’re seeing now, makes me hopeful that the AI train is slowing down and giving us time to adapt.
The higher degree of personalization in ChatGPT-5 tells me that knowing your customer (KYC) deeply is becoming much more important. Only then can we emulate their LLM experience and optimize for it.
I see a growing divide between performance marketing, where AI matches your ads and creative with users to turn them into customers, and AI visibility optimization, where the key to influencing users is the right content and web presence so that LLMs recommend you as the preferred solution.
But without knowing the customers we want to influence, we “optimize into the dark.”
Here’s what to do:
Two things:
Start gathering as much customer information as you can: demographics, psychographics, where they hang out, how they make decisions.
Use ChatGPT-5 for workflow automation, content generation and refreshing, and ideation (see the sketch below for a starting point).
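For that second item, here is a hedged sketch of what a content-refresh step could look like. It assumes the OpenAI Python SDK and a “gpt-5” model identifier; the prompt, file name, and update notes are placeholders to adapt to your own workflow, and a human editor should still review the output before publishing (see the point on editorial oversight above).

```python
# Sketch: refresh an existing post with GPT-5 while keeping a human in the loop.
# Assumptions: the OpenAI Python SDK's chat.completions API, a "gpt-5" model
# identifier, and placeholder file names / update notes.
from openai import OpenAI

client = OpenAI()

def refresh_post(old_post: str, update_notes: str) -> str:
    """Return a refreshed draft; a human editor reviews before publishing."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[
            {
                "role": "system",
                "content": (
                    "You refresh outdated marketing content. Keep the original voice, "
                    "update facts according to the notes, and flag anything you are unsure about."
                ),
            },
            {
                "role": "user",
                "content": f"Update notes:\n{update_notes}\n\nPost to refresh:\n{old_post}",
            },
        ],
    )
    return response.choices[0].message.content

# Example usage with placeholder inputs:
# draft = refresh_post(open("old-post.md").read(), "GPT-5 replaced the 4o and o3 models on August 7, 2025.")
```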