Google is telling marketers to think about AI search as part of normal search. That might be technically convenient for Google, but it is a reporting mess for everyone else.
In Google’s own documentation, AI Overviews and AI Mode are treated as part of the same broader Search ecosystem, and site owners are told that traffic from those experiences is included inside regular Search Console web totals. At the exact moment Google is pushing AI Mode deeper into Chrome with a side-by-side browsing experience, agencies still cannot cleanly break out how much visibility or traffic came from AI search versus classic blue-link search. If you run client reporting, that is not a minor product detail. It changes how you explain performance, diagnose losses, and defend budget.
Meanwhile, Microsoft is moving in the opposite direction. In February, Bing Webmaster Tools rolled out an AI Performance report in public preview that shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time. On April 27, Search Engine Land reported that Microsoft had also previewed more AI reporting features at SEO Week, including citation share, grounding query intent, and GEO-focused recommendations.
That contrast is the story. Google wants AI search to feel like a natural extension of SEO. Bing is acknowledging that AI search behaves differently enough to deserve its own measurement layer. For agencies, healthcare marketers, and in house teams trying to prove ROI, Bing’s approach is a lot closer to reality.
The Real Problem Is Not Zero Click, It Is Blended Measurement
Most of the AI search conversation has focused on traffic loss. That matters, but it is not the hardest part of the job anymore. The harder problem is that the channel is changing faster than the reporting stack.
Google’s documentation on “AI features and your website” says AI Overviews and AI Mode surface relevant links, use query fan-out across subtopics and data sources, and may show a wider and more diverse set of helpful links than classic web search. It also makes clear that, from a website owner’s point of view, AI features and classic Search are intertwined.
For a marketer, that creates a basic attribution issue. If a page loses clicks but gains AI mentions, the standard dashboard usually reads that as decline. If branded search volume rises because people saw your company named in an AI answer three days earlier, standard reporting often credits direct or brand search, not the earlier AI touchpoint. If a prospect converts after seeing your brand cited in ChatGPT, comparing options in Google AI Mode, and then returning through a branded query, your monthly SEO report barely captures the sequence.
That is why the old argument, “just watch organic traffic,” has stopped being useful.
Organic traffic still matters. Rankings still matter. But neither tells the full story when the search interface itself is summarizing, comparing, and filtering before the click ever happens.
Google’s Chrome Expansion Makes the Reporting Gap Worse
Google’s April 16 announcement, “A new way to explore the web with AI Mode in Chrome,” is easy to read as a usability update. The company framed it as a way to stop tab hopping by letting users keep AI Mode open while viewing publisher pages side by side.
That sounds harmless until you think about how user behavior changes inside that layout.
When AI Mode stays open next to the page, Google remains the primary interface. The website becomes supporting material. Users can compare pages, ask follow-up questions, add tabs, files, and images to the experience, and continue the session without ever returning to a classic results page. The implication is obvious: the click is no longer the main event. The AI layer is.
For publishers and agencies, that means more assisted influence and less directly observable traffic behavior. A page can matter a lot in the decision process while generating fewer measurable visits than it would have in a traditional SERP flow.
That is exactly why blended reporting is dangerous. It encourages teams to collapse two different behaviors into one line item called organic search. But classic search and AI assisted search do not work the same way, and they should not be interpreted the same way.

Bing Is Quietly Building the Reporting Marketers Actually Need
Microsoft is not perfect here, but it is at least naming the problem.
According to Search Engine Land’s February coverage of Bing Webmaster Tools AI Performance, the dashboard shows:
- total citations
- average cited pages
- grounding queries
- page-level citation activity
- visibility trends over time
Those are not vanity metrics. They are early answers to a real operational problem: if AI engines use your content to ground an answer, how often does that happen, on which pages, and for what intents?
Then came the April 27 update from Search Engine Land, which reported that Microsoft previewed citation share, grounding query intent, and GEO focused recommendations at SEO Week in New York.
Even if those additions are not live yet, the product direction matters. Microsoft is signaling that AI discovery deserves its own reporting vocabulary. Not just clicks, not just impressions, not just average position.
That is the right mental model.
AI search introduces at least four layers of value that traditional SEO dashboards routinely miss:
- Citation presence: Were you included at all?
- Citation share: How often were you included compared to competitors?
- Grounding intent: Which kinds of questions are using your pages as evidence?
- Downstream lift: Did those mentions increase branded search, direct visits, lead quality, or sales velocity later?
If your reporting does not cover those layers, you are not actually measuring AI visibility. You are measuring the leftovers.
What Agencies Should Report Instead of Pretending Nothing Changed
This is the part a lot of teams skip. They accept that AI search is changing behavior, then continue shipping the same SEO deck with a fresh paragraph at the top.
That is not enough.
A better reporting framework for 2026 should include five buckets.
1. AI citation visibility
Track whether the brand appears in Google AI Overviews, Google AI Mode, ChatGPT, Perplexity, and Bing Copilot for a defined query set. This can be manual at small scale or automated with purpose-built monitoring.
The point is not to prove perfect precision. The point is to show whether the brand is part of the answer set at all.
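The tracking itself does not need to be fancy. As a minimal sketch, assuming a manually recorded log of monthly checks (the surface names, sample queries, and the `presence_by_surface` helper are all hypothetical, not part of any platform API):

```python
from collections import defaultdict

# Hypothetical observation log: one row per (query, surface) check,
# recording whether the brand was cited in that AI answer.
observations = [
    # (query, surface, brand_was_cited)
    ("best crm for small clinics", "google_ai_mode", True),
    ("best crm for small clinics", "chatgpt", False),
    ("hipaa compliant crm comparison", "perplexity", True),
    ("hipaa compliant crm comparison", "bing_copilot", True),
]

def presence_by_surface(obs):
    """Share of tracked queries where the brand appeared, per AI surface."""
    seen, cited = defaultdict(set), defaultdict(set)
    for query, surface, was_cited in obs:
        seen[surface].add(query)
        if was_cited:
            cited[surface].add(query)
    return {s: len(cited[s]) / len(seen[s]) for s in seen}

rates = presence_by_surface(observations)
```

Even a spreadsheet export fed into a script like this answers the core question: is the brand part of the answer set at all, and on which surfaces?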
2. Competitor citation share
Visibility only means so much in isolation. If your client appears in 12% of tracked AI queries and the top competitor appears in 41%, the real story is not “you had mentions.” The story is market share.
This is why Microsoft’s teased citation share metric is so interesting. It gets closer to the strategic question clients care about: are we gaining ground or losing it?
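Until platform-native citation share ships, an agency can approximate it from its own tracked answers. A minimal sketch, assuming each tracked AI answer is logged as the set of brands it cited (the brand names and log format below are illustrative):

```python
from collections import Counter

# Hypothetical citation log for one tracked query set:
# each entry is the set of brands cited in a single AI answer.
answers = [
    {"YourBrand", "CompetitorA"},
    {"CompetitorA"},
    {"CompetitorA", "CompetitorB"},
    {"YourBrand", "CompetitorB"},
]

def citation_share(answers):
    """Fraction of tracked AI answers in which each brand was cited."""
    counts = Counter(brand for cited in answers for brand in cited)
    total = len(answers)
    return {brand: n / total for brand, n in counts.items()}

shares = citation_share(answers)
# YourBrand is cited in 2 of 4 tracked answers, CompetitorA in 3 of 4.
```

Reported month over month, a table like this turns “we had mentions” into a trend line a client can act on.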
3. Assisted branded demand
Track branded search impressions, direct traffic, and branded conversions alongside AI visibility changes. You are looking for correlated lift, not a cartoonishly simple one-to-one model.
When someone sees your brand cited repeatedly in AI answers, they may come back later through a branded query, a direct visit, or even a sales conversation where they already trust the name. That influence is real, even when analytics platforms struggle to label it cleanly.
4. Page level influence
Which pages are most often cited, summarized, or used to ground comparisons? In many accounts, it will not be the pages that used to dominate organic traffic.
Comparison pages, service pages, FAQs, location pages, and deep explainer pages often become more important in AI systems because they answer specific intent with clear structure. If those pages matter more now, content strategy and internal linking need to follow that reality.
5. Business outcomes tied to AI visible pages
This is the end game. Which AI visible pages are tied to qualified leads, admits, booked calls, requests for proposals, or revenue?
Seasons in Malibu is a good example of why this matters. The account holds 4,200+ keyword rankings and averages 5 patient admits per month driven directly through Emarketed’s marketing. More importantly for this conversation, AI mentions grew from 49 to 122. In other words, visibility inside AI systems rose while the broader zero click environment made traditional click data less complete as a standalone success metric.
If you only reported traffic, you would miss the strategic win.

Healthcare and B2B Teams Are Feeling This First
This measurement problem is especially sharp in healthcare and B2B.
Healthcare journeys are research-heavy, trust-sensitive, and often emotionally loaded. A family member might ask an AI engine about treatment options, compare providers, verify insurance coverage, and then visit only one or two sites. The provider that influenced the decision may not be the provider that earned the first click.
B2B behaves similarly, just with a different buying committee. Buyers use AI systems to compress vendor research, compare capabilities, summarize categories, and sanity check claims before they ever fill out a form.
In both cases, AI search acts like a pre-qualification layer. It narrows the field before analytics tools see much of anything.
That means agencies serving healthcare, industrial, legal, professional services, and high-consideration local businesses need better evidence than rank trackers and organic sessions alone.
Google’s Framing Is Understandable, but Marketers Should Push Back
To be fair, Google is not hiding the fact that AI features are part of Search. From the platform’s perspective, this makes sense. AI Overviews and AI Mode both pull from the web, surface links, and aim to help users explore more efficiently.
But platform logic is not marketer logic.
A platform wants continuity. A marketer needs diagnostic clarity.
When one user behavior pattern involves scanning ten blue links, and another involves reading a synthesized answer that may cite four sources, asking follow-up questions, and staying inside Chrome’s side-by-side view, those are not the same experience. Treating them as one reporting bucket makes decision making worse.
It also creates an incentive problem. If Google blends AI performance into standard totals, then platform level disruption becomes harder for site owners to isolate, quantify, and challenge.
That does not mean Google is acting maliciously. It means marketers should be careful not to inherit Google’s framing uncritically.
What To Do Monday Morning
If you run marketing for clients or an in house brand, here is the practical move.
First, split your thinking even if the platform does not split the data. Create an internal distinction between classic search performance and AI search influence.
Second, build a controlled query set. Track your core commercial, comparison, problem aware, and branded prompts across the major AI surfaces every month.
Third, map cited URLs to business outcomes. Do not just ask which pages got mentioned. Ask which mentioned pages are tied to pipeline.
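In practice, that mapping can start as a simple join between your cited-URL list and a CRM lead export. A minimal sketch, with hypothetical URLs and field names:

```python
# Pages that showed up as citations in your tracked AI answers (hypothetical).
cited_urls = {"/services/detox", "/compare/programs", "/faq"}

# Hypothetical CRM export: each lead with its landing page and qualification status.
leads = [
    {"landing_page": "/services/detox",   "qualified": True},
    {"landing_page": "/blog/news",        "qualified": True},
    {"landing_page": "/compare/programs", "qualified": False},
    {"landing_page": "/services/detox",   "qualified": True},
]

# Count qualified leads whose landing page was an AI-cited URL.
qualified_from_cited = sum(
    1 for lead in leads
    if lead["qualified"] and lead["landing_page"] in cited_urls
)
```

The join will never be perfectly clean, since AI-influenced visitors often arrive through branded or direct channels, but even a rough overlap count moves the conversation from mentions to pipeline.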
Fourth, add narrative to reporting. If AI visibility increased while organic clicks softened, explain why that can happen and what downstream signals you are watching.
Fifth, stop treating the absence of clean platform reporting as an excuse to do nothing. Imperfect measurement beats false certainty.
The agencies that win this phase of search will not be the ones with the prettiest rankings chart. They will be the ones that build a believable model for influence before the click.

FAQ
Does Google Search Console separate AI Mode traffic from normal search traffic?
Not in the clean way most marketers want. Google’s documentation treats AI features as part of Search, and site owners generally see those results blended into existing reporting totals rather than broken into a dedicated AI search channel.
Why is Bing Webmaster Tools AI reporting a big deal?
Because it starts measuring citation behavior directly. Bing’s AI Performance reporting shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time. That is closer to how AI discovery actually works than a clicks-only view.
What should agencies measure if AI search lowers clicks?
Measure citation visibility, competitor citation share, grounding query intent, branded search lift, page-level influence, and the business outcomes tied to pages that show up inside AI answers.
Is AI visibility valuable if it does not generate immediate traffic?
Yes. AI visibility can shape consideration before a user ever clicks. Many people see a brand in an AI answer, compare options, and return later through branded search, direct traffic, or an offline sales conversation.
Which businesses need AI search reporting most urgently?
Healthcare, B2B, local service, and high-consideration businesses need it first because their buyers often research deeply, compare options, and make decisions after multiple invisible touchpoints.
Should we stop caring about rankings and organic traffic?
No. Those metrics still matter. They are just incomplete on their own. The better approach is to keep classic SEO metrics while adding a separate layer for AI visibility and assisted influence.
The next version of search reporting is going to look less tidy than the last one. I’m glad Microsoft is at least admitting that. Google may catch up eventually. Until then, agencies need to build their own measurement model instead of pretending AI search behaves like ten blue links with a fresh coat of paint.