Image: Google DeepMind via Pexels

As AI assistants replace traditional search, visibility on the internet is no longer won by appearing high in search results, but by being cited. This shift is quietly rewriting the economics of attention, advertising and trust online.

By: Lydia Liu

Edited by: Alex Oh

How AI Is Rewriting the Web’s Attention Economy

When people look for products, news or quick facts online today, they often no longer browse through lists of links. They type a question into an AI assistant, wait a few seconds and receive a single, well-phrased answer drawn from across the web. The process feels effortless. There is no need to compare sources or open multiple pages, because the AI has already done that work.

This quiet change in behavior is reshaping the structure of the internet itself. For almost three decades, the web’s attention economy revolved around a simple sequence: searches led to clicks, clicks generated ad impressions and those impressions funded the content that people read. Each interaction kept traffic circulating among millions of sites. Now the loop is breaking; when an AI system condenses a dozen pages into a short summary, the click disappears, and so does the flow of visitors that once supported newsrooms, small businesses and entire advertising industries.

A 2025 study by the Pew Research Center found that when Google’s AI-generated summaries appear on a results page, only about 8% of users click any external link, compared with 15% when no summary is shown. Analysis by Semrush, based on more than ten million searches, shows that by March 2025, Google’s AI Overviews appeared in over 13% of U.S. desktop queries—double the rate from January—and that organic click-through rates fell from 1.41% to 0.64% over the same period.

The explanation lies in human preference: AI-generated responses save time and feel more reliable. They are shorter, visually cleaner and phrased in natural language rather than technical fragments. Searching has become less an act of exploration and more a conversation with a single voice. Each zero-click answer removes what used to be a small but vital transaction: an ad impression, a page view or a new reader discovering an unfamiliar site. Bain & Company estimates that about 80% of internet users now receive direct answers on the search page in at least 40% of their queries, reducing available traffic for advertisers and publishers by up to 25%.

Attention, once scattered across millions of pages, is beginning to pool inside a few AI platforms. As these systems become the main entry points to information, they also decide what people see. Visibility, which used to depend on ranking algorithms and keyword strategy, now depends on whether the model selects a source for inclusion in its answer. The gatekeeper of the web is no longer the user’s curiosity but the model’s design.

In this new search environment, the meaning of visibility has changed. On the traditional web, attention could be earned through ranking—by mastering keywords, building backlinks and climbing to the top of a results page. In AI search, position matters far less than selection. What counts is whether a source is chosen to appear within the answer at all, a new kind of gatekeeping that determines who gets seen and who disappears.

Recent research helps quantify this shift. Analysts at Yext examined 6.8 million citations drawn from major AI search platforms such as ChatGPT, Gemini and Perplexity, and found that 86% of all references came from company-owned or official websites, suggesting that AI systems favor information that is clear, structured and traceable to a verified source. The companies that already control their digital infrastructure—those with consistent metadata, well-maintained websites and recognized authority—are the ones most likely to appear in AI-generated answers.

For others, the task is much more complicated. A 2025 study by Xponent21 found that when a brand appears in just 40% of relevant AI answers, its exposure increases by more than seven times compared to what it achieves through traditional search ranking. Yet reaching that level requires investment in content structure and credibility that many smaller organizations cannot afford. Another dataset compiled by Ahrefs shows a strong correlation—about 0.67—between how often a brand is mentioned by third-party sites and its likelihood of being cited in AI summaries. Together, these findings suggest that the new gatekeepers of visibility are not advertising budgets or viral moments, but the quiet mechanics of data consistency and semantic clarity.

The shift is subtle yet structural. Traditional search was a marketplace of competition where every site had a theoretical chance to attract attention through optimization or timing. AI search is more selective. It relies on confidence thresholds, citation reliability and pre-trained preferences built into large models. When these systems decide which sources to quote, they are in effect deciding which voices will shape public knowledge. This process, while efficient, carries a cost: it rewards the already credible and sidelines those without a digital footprint to prove their trustworthiness. For users, this means less active choice in how they navigate the web; for companies, it redefines how engagement and revenue are created.

As AI-powered search tools become the default way people look for information, the foundations of online advertising and media revenue are beginning to crack. Many news outlets and publishers that once relied on referral traffic from search engines now report worrisome declines. For example, a recent survey among major media organizations showed that when AI-generated summaries take over search result pages, publishers experience referral traffic losses ranging from 1% to 25%, with some cases showing even greater declines. The effect is most visible among those whose revenue depends heavily on ad impressions tied to page views. As users receive their answers directly from the AI interface and never click through, the ad-supported model loses its lifeblood.

According to a 2025 Digiday report analyzing industry traffic data, organic search referral traffic dropped across the board for both news and entertainment websites, and many publishers described the drop as “steep and steady.” Some outlets saw even sharper declines. One industry analysis found that click-through rates for already top-ranked pages dropped by as much as 34.5% after the rollout of AI Overviews. This dramatic shift undermines the value of ranking first in traditional search results. In some cases, previously profitable content verticals now struggle just to break even.

Faced with this disruption, marketers and content creators can no longer rely on the old formula of optimizing for keywords, getting a high ranking, then driving ads or sales through clicks. Instead, a new calculus is emerging: brands now need to win visibility directly within AI-generated answers. This shift has given rise to a nascent strategy known as Generative Engine Optimization (GEO), which prioritizes structured data, content clarity, and reliability as metrics for being chosen by AI systems. Under GEO, the unit of value is no longer the click, but the citation. An appearance in an AI-generated answer can deliver brand visibility even when no links are clicked. For large companies with established content libraries and strong metadata discipline, this new currency can translate into sustained exposure and possibly higher return on content investment than traditional ads. Smaller publishers and niche sites, however, face a steeper climb: without consistent content infrastructure and external recognition, they risk being excluded from these AI-mediated recommendation loops altogether.
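In practice, much of the structured data GEO emphasizes takes the form of schema.org markup embedded in a page. A minimal sketch of what that looks like, with a purely hypothetical organization name and URLs, might be:

```html
<!-- Illustrative schema.org Organization markup embedded as JSON-LD.
     The name, URL and profile links are hypothetical placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Widgets Co.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Widgets_Co.",
    "https://www.linkedin.com/company/example-widgets"
  ]
}
</script>
```

Markup like this gives crawlers and AI systems a machine-readable statement of who a publisher is and where its identity can be verified, the kind of clear ownership signal that analyses such as Yext’s associate with being cited.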

In effect, the economics of online visibility is bifurcating. On one side are the “winners”—big and established players who can afford to optimize for machine readability and invest in comprehensive content ecosystems. On the other side are the “long tail” of independent publishers and small brands, whose visibility and revenue are threatened by a shift they neither anticipated nor controlled. For the broader information ecosystem, that raises urgent questions about diversity, access and fairness.

As AI search consolidates control over how users access information, it is also redrawing the boundaries of competition across the digital economy. In the older ecosystem, advertisers could buy visibility through paid search, while smaller players could still attract attention through clever targeting or well-timed content. That balance is collapsing. With the arrival of AI-generated answers, the entire structure of ad valuation—what a view or a click is worth—has begun to blur. Publishers are the first to feel the shock. As previously discussed, industry reports have shown significant declines in referral traffic following the rollout of AI Overviews; each missing visitor translates directly into fewer ad impressions and lower revenue across the publishing sector.

The growing dependence on AI intermediaries has also begun to reshape competition itself. When visibility depends on being cited by the model, exposure tends to concentrate around the same limited set of domains. Studies of early search-answer data show that more than two-thirds of AI citations in consumer queries come from the top 1% of websites indexed by traffic and authority. This pattern mirrors older forms of consolidation in media and retail, where technological innovation initially promised openness but ended by rewarding scale.

The difference this time is that the filtering is invisible. Users no longer see what has been left out, and even specialists cannot easily trace why a system chose one source over another. Researchers studying human trust in AI search found that people are more likely to accept an AI-generated answer when it includes a citation, even if the cited material is only loosely related to the response. This creates a feedback loop of authority: the sites that are cited become more trusted, and their content is then more likely to be used in future AI training rounds. Over time, that cycle could reinforce the dominance of a small group of established information providers while making it harder for new entrants to gain recognition.

Some researchers warn that this concentration of visibility could reshape the informational fabric of the web in ways that are difficult to reverse. The diversity that once defined the open internet—its mixture of large and small voices, commercial and independent sources—is being replaced by a curated layer optimized for coherence and brand safety. In the short term, users may find this version of search cleaner and less confusing. In the long term, they may find it narrower, with fewer perspectives and slower innovation.

For policymakers and technology firms, the challenge is to maintain transparency and diversity within these AI ecosystems without compromising quality. Several organizations, including the News Media Alliance and the European Publishers Council, have begun urging regulators to require disclosure of citation criteria and compensation models for AI-generated summaries. Their argument is not simply about fairness but about sustainability: if AI systems absorb information without returning traffic or revenue, the public sources that feed them may eventually dry up.

The transformation of search may seem technical, but its consequences reach far beyond the mechanics of browsing. The way information is delivered shapes what people learn, what they buy and how public attention circulates. In the age of AI summaries, the open web is slowly turning into a filtered web, where visibility must be negotiated rather than earned. The challenge ahead is not only how to design systems that answer questions efficiently, but how to preserve the plurality and independence that once defined the internet. For now, the most important decisions about what people see online are being made not by human editors or even by consumers themselves, but by algorithms trained to predict what appears most reliable. The future of knowledge on the internet may depend on whether those predictions continue to serve the public or begin to serve only themselves.
