Generative Models (like ChatGPT) in Search: Monetization, Consequences, and Paradigm Shift
There has been much said about the incredible capabilities of generative models like ChatGPT, and now Microsoft is scurrying to put that technology into its products with a rumored $10B investment in OpenAI. It wants to be the first mover, to keep its grip on the business software market, and to gain a bigger share of advertising revenue by drawing people to Bing instead of the almighty Google.
It’s pretty clear that the technology behind ChatGPT will become a critical part of the search experience in years to come, but not yet. It’s great to have a conversation, to write an email or an essay or some code, to make alterations to existing text, to compile multiple items into a list, or to provide a summary. But you’ll notice only the last two (make a list, summarize a topic) are related to search. That’s my focus for this post.
Search vs. ChatGPT
ChatGPT is not a replacement for search because search is retrieval, and ChatGPT is generative (more detail in my blog post here). Retrieval takes you to something that already exists. It’s like you asking me about Martin Luther King and me sending you some essays about him from my collection. Generation creates something new. It’s like you asking me about MLK, but instead of finding something already written, I do some research and then write a summary of that research specifically for you.
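The distinction can be made concrete with a toy sketch. This is illustrative only, with no real search engine or LLM behind it: "retrieval" hands back documents that already exist, while "generation" composes a new response (here crudely faked by stitching retrieved text, where a real LLM would write fresh prose).

```python
# Illustrative sketch only: retrieval returns existing documents;
# generation writes something new for this specific query.

corpus = {
    "mlk": "Essay: Martin Luther King Jr. and the civil rights movement.",
    "bahamas": "Guide: snorkeling and beaches in the Bahamas.",
}

def retrieve(query: str) -> list[str]:
    """Retrieval: hand back existing documents that match the query."""
    q = query.lower()
    return [doc for key, doc in corpus.items() if key in q]

def generate(query: str) -> str:
    """Stand-in for generation: compose a new summary just for this query.
    A real LLM would write this text; here we only stitch retrieved docs."""
    found = retrieve(query)
    if not found:
        return "Summary written for you: (the model's best guess)"
    return "Summary written for you: " + " ".join(found)
```

The key difference: `retrieve` can only ever return what is already in the corpus, while `generate` produces text that exists nowhere until you ask for it, which is exactly why its quality is hard to audit.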
Sounds great, right? ChatGPT allows you to skip the whole research process! You can get the information you need without having to do all the work! Indeed, that is the promise of generative models. The problem is that you don’t know how good the model is at doing its research and summarizing its findings. It could be as good as a Harvard professor…or it could be as good as your neighbor’s third grader*. The other problem is that it’s costly: much more expensive than a search, with some estimating that 1M users of ChatGPT cost OpenAI $8M a month! How will Microsoft make money on it?
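To see why that estimate matters, a quick back-of-envelope calculation helps. The $8M/month figure is the outside estimate cited above, not an official OpenAI number, and the queries-per-user figure is a purely hypothetical assumption:

```python
# Back-of-envelope math from the cited estimate (not official figures):
# 1M users costing ~$8M per month.
monthly_cost_usd = 8_000_000
monthly_users = 1_000_000

cost_per_user_month = monthly_cost_usd / monthly_users  # $8 per user per month

# Hypothetical usage assumption: 100 queries per user per month.
queries_per_user = 100
cost_per_query_usd = cost_per_user_month / queries_per_user  # $0.08 per query
```

Even a few cents per query is orders of magnitude above the fraction-of-a-cent cost usually attributed to a traditional web search, which is why monetization is the central question.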
Search generates revenue through paid ads that usually appear at the top of search results. But ChatGPT doesn’t give you a list of results; ChatGPT gives you a single written response. Will companies pay for advertisements to appear above that paragraph? Not in all circumstances, because if ChatGPT does its job, the user gets what they need from the response and will never click on the link. A link will only get clicked if ChatGPT’s answer was bad, or if it was only a first step to more information. Let’s look at those two situations.
If ChatGPT’s response was obviously bad, then links are the best alternative. But in many situations it’s quite predictable when links will be better than ChatGPT, and it would be better to not run ChatGPT and instead show the list of links that we’re all familiar with. In this case, there’s no change to monetization: ads still appear at the top of the list.
But there are situations where ChatGPT is an entry point to more information. Let’s say you ask ChatGPT to recommend a travel destination that’s warm this time of year, friendly to English-speakers, and has great snorkeling. Based on your criteria it suggests the Bahamas. Having a link to an all-inclusive Bahamian vacation site would be pretty handy! Except for one thing. Advertisers are used to paying for links when someone types a keyword like “Bahamas” or “Vacation,” but it’s impossible to predict when ChatGPT will pick the Bahamas. Will advertisers pay for a link to the Bahamas when ChatGPT recommends a trip to Hawaii? Probably not. So Microsoft will only show the Bahamas link when ChatGPT recommends the Bahamas.
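The shift described above can be sketched in a few lines. This is a hypothetical toy, with made-up ad copy and keywords: classic targeting matches ads against the user's query, while the new model matches them against the generated response.

```python
# Hypothetical sketch of the paradigm shift: ads keyed to the model's
# response instead of the user's query. Keywords and copy are invented.

ads = {
    "bahamas": "All-inclusive Bahamas vacation packages",
    "hawaii": "Hawaii island-hopping deals",
}

def ads_for_query(query: str) -> list[str]:
    """Classic search targeting: trigger on keywords the user typed."""
    return [ad for kw, ad in ads.items() if kw in query.lower()]

def ads_for_response(response: str) -> list[str]:
    """Generative-era targeting: trigger on what the model actually said."""
    return [ad for kw, ad in ads.items() if kw in response.lower()]

query = "Somewhere warm, English-friendly, with great snorkeling?"
response = "Based on your criteria, I recommend the Bahamas."
# The query names no destination, so keyword targeting finds nothing;
# the response does, so response targeting surfaces the Bahamas ad.
```

Note that in the second function the advertiser's trigger fires only when the model happens to name their product, which is exactly the unpredictability problem described above.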
Power to the People? No, to the Models!
This changes the paradigm. In the search model, the user’s search terms trigger ads, but now it’s ChatGPT’s response that triggers them. Might users be suspicious that the ad is driving ChatGPT’s recommendations? Yes. Will Microsoft adjust the model to favor the best-paying advertisers? Probably. Will that bias the results and hurt companies that pay less for advertising (or in this case, lesser-known destinations, since generative models like ChatGPT favor things that are more common)? Definitely. Will anybody be able to see if any of this is happening? No.
This Changes Everything
This will be a fundamentally different model for advertisers and for search providers, and it creates a new dynamic in how we obtain information. For e-commerce, being part of a ChatGPT response will drive people to you in droves; if you’re not mentioned, you will be invisible. The same is true for discovering information: as long as we trust ChatGPT, we’re getting only one view of the world. Sure, it’s the view that is common or probable, but it’s also generic.
The technology behind ChatGPT – generative models based on Large Language Models (LLMs) – is here to stay and is going to revolutionize how we do many things. Most of those things (writing an email, debugging code, summarizing a document) aren’t search. But the search experience will be transformed and splintered, with specialized LLMs serving specific needs. Today’s low-hanging fruit is synthesis (compiling lists and writing summaries) for common topics, but it won’t stop there. Eventually, a sufficiently advanced LLM could replace search, but only if it becomes economically feasible to continually re-train it, keeping it up to date with new content.
Generative models will change pretty much everything. I expect the following:
- Today’s search portals (Bing and Google) will expand into portals that provide much more than search. They’ll defy categorization: bigger than information portals, conversation portals, or creativity portals…maybe AI Portals.
- There will be an explosion of apps (and specialized portals) leveraging these technologies to inform us, make us more productive, assist our creativity, and do many things we haven’t even thought of yet.
- Information diversity and discovery will suffer as people come to trust a single written answer. This will accelerate inequality, giving an advantage to anyone who can be part of the probable answer.
We’re still at the very early stages of this journey, and the technology is evolving fast. We can’t predict the future, but we know it will be different. My kids can’t imagine a time without access to the world’s information. If AI can synthesize anything and everything, their kids may not be able to imagine a time when access to information was ever even needed.
*Even worse, it will always sound good because it will be well-written…like it came from a graduate student, with no visibility into the quality of the research!