Google's 🚨 RED ALERT 🚨

If you pay attention to the news, you know that Google is panicking over the recent research-preview release of ChatGPT, one of OpenAI's Large Language Models. But the roots of this crisis go back years, to a fundamental misunderstanding within Google about their core strengths.

Google has been the dominant search engine on the internet for a full generation — they passed Yahoo in 2006, reached 50% of search traffic in 2007, and have since become so ubiquitous that they're a verb. Their product was, simply, better than anyone else's: they crawled more of the web, ranked pages extremely accurately, and could seemingly find a result for any query, to the point that there's actually a game based around finding searches that return exactly one result. Their business model was simple, too: they'd place one or two relevant ads at the top of the page, and people would occasionally click on them.

Then they forgot what their core product was. As more users became familiar with Google, they started asking natural language questions rather than using the keywords that return the best results. This style of searching, long the domain of librarians and Ask Jeeves, is closer to how people are taught to interrogate the world around them. For folks who were used to only asking questions of other humans, this was entirely natural. Google was good enough at returning the right answer (most of the time), so people learned to simply type their whole question into Google and look at the top two or three links.

Large Language Models are particularly well optimized for this way of interacting with humans. They're designed to parse human language by using a massive amount of existing text to discern the underlying meaning of a question — similar, in fact, to how humans use experience to understand one another. This is exactly what people think Google has been doing this whole time, except that they haven't been. Google added a 'Knowledge Panel' sidebar next to their search results, started generating their own summaries of search results, and inserted 'best guess' answers above the most relevant link to try to fulfill this need, but the primary results remain a ranked list of links.

For Google to be threatened by ChatGPT as a knowledge engine, their entry would have to be good. OpenAI spent years comprehensively classifying the world's information to make sure they have the right answer — and their model still gets it wrong sometimes. Google pulls from the highest-ranked pages with no evaluation of the quality of the information — with predictable results, notably uncritically repeating conspiracy theories, antisemitic tropes, and other hate. For people to trust a product like this, it needs to be perceived as 100% reliable. And it just isn't; it's too easy to fool. A single obviously wrong result will lead people to never trust it.

It’s like nobody watched Arthur as a kid or something

Google has mistaken what they're good at (page ranking) for being good at something else — they aren't, and have never been, a good knowledge engine. The risks to Google's product are closer to home. Even average users (hi Dad!) are noticing that all of the above-the-fold content on a new search is now paid advertising, with the sidebar also mostly containing paid content. Meanwhile, a host of new competitors has cropped up, and the quality of Google's ranking has declined as they fight an ever-escalating war against SEO tooling and spammy content. The only red alert I see is that the thing that makes Google all the money just isn't that good anymore.


For a really great take on some of the underlying problems in Google’s product culture, read Jackie Bavaro’s piece on their (lack of) product strategy.

Thanks to my friend Soroush for inspiring this post, Nick Heer for editing it, and various other folks for helping me refine my thoughts through, ahem, spirited debate.