As search engines focus on AI, don’t lose sight of what counts with content.
As both Google and Microsoft have made big announcements on artificial intelligence, it’s important that marketers don’t get distracted from the fundamentals of search.
- AI is the headline act at Google and Microsoft events – but not without hitches.
- Google confirms it won’t necessarily ‘penalise’ content just because it’s AI-originated – the number one priority remains how useful, helpful and valuable the content is to real people.
- Brands worried about referral traffic need to balance that concern against the importance of wider relevancy.
This week has seen some big announcements in the world of search and, perhaps unsurprisingly, artificial intelligence was centre stage.
On Tuesday, Microsoft lifted the veil on the latest iteration of its Bing search engine, powered by OpenAI (the people behind ChatGPT). It’s a move that Microsoft believes will dramatically improve the search experience for users and help it wrestle back some market share in search.
Not to be outdone, Google had some announcements of its own at its Live in Paris event on Wednesday where, right at the heart of its announcement, was AI-driven search.
Sharing details of its Bard AI engine, Google spoke of how it would use AI to aggregate information for what it called “NORA” – “no one right answer” questions. In these cases, Bard would respond to queries by bringing multiple sources of content into the search result.
But it’s fair to say that the Google event wasn’t without its hitches. The company’s share price fell by 9% after its own promotional video for Bard returned an incorrect answer to a question about the James Webb Space Telescope. And whilst Google has insisted that it does intend to drive traffic to referring sites, the prospect of Bard surfacing answers directly within the search results page has naturally caused concern among brands who fear they won’t receive traffic from the content they provide. Indeed, Microsoft CEO Satya Nadella was at pains to point out that Bing fully intends to drive traffic to the sites that create the content behind its search results, describing this as a matter of “fair use”.
Google’s other announcement on AI
Something else that Google announced on Wednesday, albeit to somewhat less fanfare, was a statement on how the search engine intends to treat AI-generated content in the future. Given the rise of tools like ChatGPT and the prospect that brands could produce large volumes of content with relatively minimal human input, it was a statement many had anticipated and, perhaps unsurprisingly, the guidance was somewhat non-committal.
The key line in the guidance is perhaps:
Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high quality results to users for years.
So, on the face of it, a pass for AI-driven content. Indeed, Google acknowledges that there are many examples where AI content has its place, but cautions that AI shouldn’t be over-used. That’s a pertinent point given OpenAI’s moves to release tools that, albeit in their infancy, can detect whether content was produced through its AI models, with hidden watermarks also likely to come in the future.
This Google statement on AI shouldn’t be taken in isolation, but in the round with Google’s wider guidance – large parts of which have also changed quite markedly in recent months.
The importance of the added “E”
Back in December 2022, Google updated its search quality evaluator guidelines, adding an extra “E”, for “experience”, to the expertise, authority and trust of the “Google EAT” framework it introduced in 2018.
The fundamental point of the new Google EEAT guidance is that, as more and more brands produce more and more content written with authority and by trusted experts, the bar for determining which content should rank needs to rise. Experience, meaning first-hand experience with the product or service, is how Google intends to make that distinction. A review of a restaurant by someone who can demonstrate they have actually been there (with original photos of the venue and the dishes sampled, for example) is more credible than a review without that evidence. Likewise, advice on choosing between two models of car is far more credible if there is evidence that the reviewer has test-driven both.
And this is a big area where AI-generated content simply cannot compete with human-crafted content. For all that an expert can instruct artificial intelligence to create a body of content, AI simply cannot experience that subject matter in the same way. For all that Google may be suggesting that AI-generated content has a place, that must be put into the context of its wider guidance; that in situations where audiences need to especially trust what they are seeing, we’re a long way from AI being able to compete with expertly crafted, original content.
Relevancy will still matter
The debate around the extent to which AI-driven search results will drive traffic will continue, but this isn’t a new discussion. Google has been answering queries directly in the search results, often using commercial brand content to do so, for some time. The advice remains the same: keep generating strong informational content to build brand relevancy around the topics you have a right to be relevant for.
The brands that want to succeed in highly competitive, high-volume search markets will still need the full package of depth, relevancy and EEAT. Whilst there will inevitably be a temptation to lean on AI in producing that content, it’s important to consider the direction of travel in Google’s wider guidance. In emphasising EEAT, Google is signalling what it wants in order to deliver trusted search results and an optimal user experience. The additional ‘E’ for experience could stand in for many other terms: original, relatable and authentic, to name just three.
That is where AI will always struggle. Whilst there is no question that the technology will improve, our tests with ChatGPT have produced some good pieces of content, but also pieces that feel formulaic and give rise to concerns about duplicate content. On the issue of authenticity, one of our tests saw ChatGPT confidently produce an article about a product that we had completely made up, and people with early access to the new Bing search have reported similar experiences.
Well a very humble Redditor did fact check the answers looking for the quoted studies (the Bing response only linked to 1).
Annnnnd it made a lot of stuff up apparently. pic.twitter.com/3mOfkXqm5g
— Gael Breton (@GaelBreton) February 9, 2023
So what do we conclude from this week’s events?
The extent to which a brand may trust or rely on artificial intelligence will be a commercial decision for each brand. But whilst brands have always been able to source low-quality content cheaply and at volume, that content has rarely achieved organic visibility in a consistent, sustainable manner. AI undoubtedly has a role to play, but it’s a big risk for any brand to assume that AI-generated content will change that.