The case FOR AI Content
The release of the Helpful Content Update emphasizes the importance of human-created content. But why shouldn't some content be created by AI?
Google’s Helpful Content Update (HCU) has revived the conversation around AI content, but the common arguments are too black-and-white: "It's either good or bad." I want to offer a third angle to make the discussion more useful: it depends on the use case. Since AI content tools have become more accessible and usable, I’ve encountered numerous cases that rank very well and are useful to searchers.
To be clear, HCU targets low-quality content (see my translation). The official documentation doesn’t mention “AI” or “machine-generated content” once, only “extensive automation” (which sounds more like content spinning to me). It seems to be much more related to the Panda algorithm and looks at content quality in general.
However, one reference hints at the point that content should be created by humans (bolding mine):
To this end, we're launching what we're calling the “helpful content update” that's part of a broader effort to ensure people see more original, helpful content **written by people, for people**, in search results.
Representatives have explicitly stated in the past that AI content is against Google’s guidelines. And, of course, there are the guidelines themselves, which forbid auto-generated content. [x, x]
But why?
Quality over origin
Why does it matter whether content is created by a human or a machine? The only things that should matter are quality and helpfulness. Innovation has helped humans automate processes with better technology throughout history, from the washing machine to cars and microwaves. Content creation should be no exception, at least for certain types of content.
Content isn’t content. It can fulfill different functions:
Explain
Educate
Describe
Inspire
Answer
Provide context
Provide an opinion
Summarize
Machines can cover some of these functions, but not all of them. The argument that content should be “human-created” implies that humans can simply do certain things much better than machines; humans act as a kind of quality filter.
But for other use cases, we should absolutely use machine-generated content. Google even recommends automating meta-descriptions for large sites.
A few use cases for AI content:
Meta descriptions
Product/category descriptions
Summaries
Definitions
Transcriptions
There might be more, but why not use machines to create content with low complexity? This type of content isn’t engaging for humans to write, and machines might do a better job.
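To make the meta-description use case concrete, here is a minimal sketch of how a large site might generate descriptions programmatically from structured data. The field names (`name`, `category`, `price`) and the phrasing template are hypothetical, not from any specific CMS or from Google's documentation:

```python
# Illustrative sketch: programmatic meta descriptions from structured product data.
# Field names and template wording are hypothetical assumptions for this example.

def meta_description(product: dict, max_len: int = 155) -> str:
    """Build a meta description, trimmed to the length Google typically displays."""
    desc = (
        f"Shop {product['name']} in {product['category']}. "
        f"From ${product['price']:.2f} with free shipping."
    )
    # Truncate gracefully instead of letting the SERP cut it mid-word.
    return desc if len(desc) <= max_len else desc[: max_len - 1].rstrip() + "…"

example = {"name": "Trail Runner X", "category": "Running Shoes", "price": 89.0}
print(meta_description(example))
```

The point isn't the template itself but the pattern: low-complexity, high-volume content like this is exactly where automation beats asking a human to write thousands of near-identical snippets.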
I distinguish between three types of search content: contextual, deep, and narrative. Contextual content provides context along every small step of the user journey: knowing what to search for, picking the right snippet, understanding what the site offers, picking a product, or signing up. Types of contextual content include glossaries, metadata, “what is” articles, and product descriptions.
Deep content is differentiated and unique. Its main purpose is to explain complex topics or help searchers make complex choices. At its heart, deep content leads to a key realization in the user journey.
Narrative content is not geared toward performing in Search itself but helps brands tell a story. It might attract backlinks when the story is link-worthy, but at minimum it helps a company gain attention from its target audience. Thought leadership and data stories are part of narrative content.
Google uses AI to create contextual content itself:
Google provides a machine-learning-based summary in Google Docs
Google summarizes city descriptions with AI
YouTube video captions
If it’s okay for Google, why not for the rest of the web?
The evolution of content
Over the last 20 years, content has gone through an evolution:
Companies (mostly publishers) created and shared content one-directionally with consumers
Consumers shared content with each other through social networks
Consumers created and shared content with each other through social networks and platforms
Users either use AI to create content and share it with each other, or users create content and AI distributes it to other users
We are currently entering the 4th step in the evolution through everyone's favorite video platform. TikTok is so sticky because it understands what humans want better than anyone else does. Its machine-learning-based algorithm measures every signal of an interaction, even down to keystrokes in the in-app browser.
Google also used a superior algorithm to dominate its market. Its ecosystem was built on connecting searches with websites and advertisers. It used the PageRank algorithm to determine the most authoritative sites and deliver the most relevant results. The algorithm evolved and now factors in content, user experience, and user behavior.
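The core idea behind PageRank can be sketched in a few lines: a page's authority is the sum of authority flowing in from pages that link to it. This is a toy power-iteration version on a hypothetical three-page link graph, nothing like Google's production system:

```python
# Toy PageRank via power iteration (illustrative only, not Google's production ranking).

def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # Dangling page: spread its rank evenly across all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked to by both "a" and "b", so it ends up with the highest rank.
```

Even this toy version shows why links worked as an authority signal: rank flows toward pages the rest of the graph points at, and the result is hard to fake without real inbound links.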
TikTok leads us to the 4th step because it doesn’t rely on a page, knowledge, or follower graph. It profiles users based on their behavior and matches them with the content they’re likely most interested in. Evaluating all these signals wouldn't be possible without machine learning. As with Google, the algorithm is a competitive advantage.
That poses a lethal threat to any company still caught in steps 1-3 of the content evolution and could be one of the reasons why Google is moving against AI content.
Google’s problems with AI content
Other reasons why Google forbids AI content could be:
AI content is often so low quality that it’s equal to spam
An ethical reason
Google doesn’t want websites to gain a competitive advantage through AI content and use Google to build audiences of their own
Fear of a flood of commodity content
It would be hard for a search engine to keep up if content could be quickly created and optimized with AI
Google wants to answer high-level questions itself with AI
Whatever the reason, two things are true today.
One, AI-generated content can perform well in search if the quality is high. Again, the nuanced take is what kind of content we’re talking about. What works today is mostly functional content.
Two, AI content is not yet easy to scale. Prompt engineering, the craft of telling an AI tool like GPT-3 or DALL-E what you want, is not straightforward. ML still needs a lot of guidance and reviews. No one trusts AI content enough yet to blindly publish it. [x]
Another challenge is that AI content tends to become more repetitive the longer a piece of content is, diminishing its value for users and making it easy for Google to detect. People are afraid that we’re on the brink of getting a 3,000-word essay at the push of a button. We’re not there yet.
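The repetition problem is measurable. One rough heuristic (my own illustrative sketch, not a method Google has disclosed) is to count how often word trigrams repeat within a text; copy that loops over the same phrases scores high:

```python
# Rough repetition heuristic: the share of word trigrams that occur more than once.
# A simple illustrative signal, not an actual detection method used by any search engine.
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

varied = "the quick brown fox jumps over the lazy dog near the riverbank"
looping = "our product is the best choice because our product is the best choice"
# The looping sample scores much higher than the varied one.
```

If a two-line heuristic can flag looping copy, a search engine with vastly more signals certainly can, which is why long-form AI content that pads itself with repetition is a risky bet.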
However, as AI content gets better, it will also be available to more people simultaneously, including Google. I predict that there will be a small window of opportunity for companies that can scale AI content the fastest, and then it will be available to everyone. The biggest differentiator will be what inputs (data) companies can use to create content.