AI may replace search engines. Is that good?

Apr 05, 2023 08:55 PM IST

The internet’s great promise was democratisation through decentralisation. If AI chatbots replace traditional search, it may hasten the end of this promise

It is hard to think of any tech advancement that has caught the global imagination as quickly as generative AI, especially ChatGPT. These chatbots interact with users in a conversational manner, respond to queries with succinct synthesised answers, keep track of what was said before, and field follow-up questions. The result is a human-like interaction that has shocked and enamoured the world.

Unlike human intelligence, which is built on understanding and creativity, GPT is based on probability: large language models (LLMs) train on massive amounts of data to learn patterns and predict the most probable sequence of words. (REUTERS)

However, there is one pivotal distinction: Unlike human intelligence, which is built on understanding and creativity, GPT is based on probability. Large language models (LLMs) train on massive amounts of data to learn patterns and predict the most probable sequence of words. Consequently, they will occasionally provide confident responses that are inaccurate or biased. Nevertheless, ChatGPT’s popularity hints at a future where AI bots replace traditional internet search functions — a possibility that reportedly pushed Google to rush the release of its AI response, Bard.

A traditional search engine scours the web for information related to the query and provides a list of links to original sources. The user must then review the information to build a fuller understanding. Based on the quality and relevance of the returned links, the user may try wording her search differently. This iterative process has four components: exposing the user to new and different sources of information; requiring her to review individual sources that may contain diverse and contradictory perspectives; letting her discern the relative relevance and credibility of individual links; and finally, having her synthesise the information to build an understanding. An AI-driven chatbot, on the other hand, will omit these steps and give a unified, synthesised response, largely removing user discretion and original sourcing from the equation.

This will lead to massive centralisation. All organic search traffic, which some studies estimate to be half of overall web traffic, will get concentrated in the AI chatbot, since responses will summarise information instead of directing traffic to individual sources. Even for AI-driven search engines like Bing, which list sources, the nature of “synthesis” implies that many original sources will fall by the wayside. Moreover, it is likely that a disproportionate number of searches will terminate at the AI chatbot itself instead of continuing to individual sites. If this were allowed to happen, it would deprive individual sites of visibility and revenue and render most sites unviable. This could change the internet as we know it, because individual websites will need to reconfigure themselves. It also raises serious ethical concerns, because companies hosting AI chatbots will profit from original work done by institutions and individuals without (appropriate) compensation.

This centralisation may amplify other dangers, the biggest of which is disinformation. Currently, a handful of organisations create and maintain AI language models, giving them enormous power to control the future of the internet. Since AI is trained to identify patterns, it is possible to control the nature of the outcome by limiting the scope of the training data. It is also possible to constrain responses by non-transparently encoding rules in the model itself. This can lead to biased and incomplete answers, made all the more egregious since the user is unlikely to know of either the omissions (wilful or innocent) or the biases. Moreover, since language models do not understand information but rely on probability, they may be unable to identify disinformation.

For language models trained on open internet data, it may be possible to seed disinformation, which would then be disseminated by the chatbot to unsuspecting users. Compare this to a traditional search engine, which allows the user to explore a diversity of sources and provides the user with valuable input into assessing antecedents and completeness of the sources of information. This is not to say that search engines don’t amplify disinformation today, but arguably that the danger will become more acute with chatbots by removing even more user discretion from the act of consuming information on the web.

Supplanting traditional search with AI may also lead to a uniformity of perspectives offline. Users are likely to accept the information as is, rather than sift through diverse sources to make up their own minds. This will limit their ability to make independent connections with sources of information and perspectives, and restrain their ability to think critically.

The internet’s great promise was democratisation through decentralisation. Unfortunately, this promise floundered in the last decade for reasons such as inadequate oversight, Big Tech monopolies and real-world politics. If AI chatbots replace traditional search, it may hasten the end of this promise.

Ruchi Gupta is executive director, Future of India Foundation.

The views expressed are personal
