Is ChatGPT a web search engine?

In a world where artificial intelligence is becoming increasingly prevalent, we often turn to tools like ChatGPT for quick answers. However, this apparent convenience can make us forget that ChatGPT is not a web search engine...

Pierre Aumont

1/1/2024 · 4 min read

By its very nature, ChatGPT delivers a mainstream answer: the opinion of the greatest number, since this AI gives a statistical answer based on a large body of data drawn from the Internet.

This is why we sometimes read that ChatGPT is like a human of average intelligence, with access to all available knowledge.

By its very nature, therefore, it is not a tool for processing and sorting a large mass of information in order to surface a piece of data outside the norm. On the contrary, it provides us with a "normal" response in the social and statistical sense...

It's a bit like standing in a crowd and asking it to answer a question out loud. We would hear a shapeless voice carrying the most common thought, the one borne by the most larynxes... but it would be impossible for us to make out, let alone concentrate on, the solitary voices.
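The "voice of the crowd" idea can be sketched as a toy frequency count (the crowd of answers below is entirely invented for illustration; a real language model predicts tokens, not whole answers, but the majority effect is analogous):

```python
from collections import Counter

# Invented "crowd" of answers to the same question. A statistical model
# trained on this data would tend to return the most frequent answer,
# like hearing the loudest chorus in a crowd.
crowd_answers = [
    "the Earth orbits the Sun",
    "the Earth orbits the Sun",
    "the Earth orbits the Sun",
    "the Sun orbits the Earth",  # a solitary voice, drowned out
]

# most_common(1) returns the single most frequent answer and its count.
most_common_answer, count = Counter(crowd_answers).most_common(1)[0]
```

The solitary voice survives in the data, but the mode of the distribution is all we hear.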

The crowd isn't always right. Nor are the lone voices.

It's true that these solitary voices can lead us to misinformation, conspiracy theories or plain stupidity.

But solitary voices are also those of "weak signals" that are rich in meaning and reflection, that sometimes shake us up and push us to think against ourselves, against our "mainstream" thinking, our collective thinking... and in this they can be very rich, even necessary. Solitary voices aren't only about misinformation and outrage; they are also, sometimes, those of whistle-blowers and singular thinkers.

It's important to hear the voice of the majority, and that voice is crucial in a democracy. But it doesn't have to be the only one we listen to.

By integrating ChatGPT into a search engine, as Microsoft did by merging Bing and ChatGPT in its February 2023 update, aren't we running the risk of systematically receiving "average" answers? It's a risk we need to watch out for, but it seems the major web players are taking care to distinguish between the varied results we're used to and the single, average response returned by the AI. For the time being, these AI answers live in the "conversation" tab. Just as well.

It's true that even today, a traditional search engine favours "mainstream" answers, since those are what appear on the first pages of results. But with these traditional engines, the data we obtain is sourced: we explicitly see not only a snippet of the answer, but also the website it comes from. We then choose the results we feel are relevant or legitimate. This is not the case with the single response provided by an AI like ChatGPT. Not only do we get a single answer, we also don't know which sources it compiled to synthesize that answer for us.

This raises questions.

I hope that future developments in conversational AI will enable us to "visualize" a simplified map, or at least some idea, of the statistical "path" the AI followed to produce its answer.

Let's return to traditional search engines. We can scroll through the list of results and eventually find some that are outside the "norm", less mainstream and more singular. It's here, in particular, that we have to judge the data, sorting it with our own knowledge, our own thinking, our own intuition, our own morality, and make our decision, render our judgment. In the end, we're a little freer to choose the answer that suits us, or that we "feel" is the right one, than when we limit ourselves to a single answer from an AI.

Today, ChatGPT doesn't share its know-how with us; it simply delivers a ready-to-use result.

Future versions of ChatGPT and its kind will enable us to go beyond its main responses and dig more exhaustively into the data it uses. Will escaping the "average" answers rely solely on our ability to formulate our queries properly (known as prompts)? Do we need to be a little more rigorous than we are today with traditional search engines? Doesn't the greatest risk come from the fact that we feel like experts in Internet search because we've been doing it for 10 or 20 years with passive tools?

I think this revolution is pushing us to try and understand the biases of our tools and our own cognitive biases...

We know some of our tools' biases: we know that Google's initial success was its famous PageRank algorithm, whose score has since disappeared from search results. We know that these results are also re-ranked by a second, commercial bias: advertising. The "AdWords" service modifies the results of our queries through a "skilful" mix of the companies that pay to appear in the results and the representation Google has of us, our tastes, our needs and our behaviours, thanks to all the data we offer it on a daily basis.
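The original PageRank idea can be sketched in a few lines (a toy four-page link graph invented for illustration, not Google's production system): a page's rank is the stationary probability that a "random surfer", who mostly follows links and occasionally jumps to a random page, lands on it.

```python
# Minimal PageRank sketch via power iteration.
# `links` maps each page to the pages it links to; `d` is the damping
# factor (probability of following a link rather than jumping randomly).
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}  # random-jump share
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:  # pass rank along each outgoing link, split equally
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: B, C and D all link to A, so A ranks highest.
graph = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "C"]}
ranks = pagerank(graph)
```

The point of the sketch is the bias the article describes: rank flows toward pages that many others already point to, so popularity reinforces visibility.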

Physical or algorithmic robots must not be left alone.

They must not be like children left unsupervised on a playground built in the middle of a highway - the information highway.

Again and again, the human conscience will have to do the right thing. We simply need to understand how these tools work, so that they are neither weapons nor demons, but simple, dumb tools for expanding our potential, satisfying our curiosity, and freeing up time for our lives in the real world... the world of trees and humans.