Blog · 6 min read

AI Models Are Lying About Your Advertising Results: How to Fix It

In 2026, winners will not be those who simply use the most AI tools, but those who know how to evaluate AI critically.
By Ieva Ramanauskaitė · Sep 23, 2025

At the end of 2025, the major digital advertising platforms – Google Ads and Meta Ads – began widely applying AI-based attribution models and statistical conversion modeling. In simple terms, they automatically "assign" conversions to your ads even when no clear user action was recorded (for example, no click). At first glance, businesses see better performance: conversion numbers increase, CPA goes down, ROAS improves. The real situation, however, can be much less optimistic – in reality there are not that many new customers or sales, and return on investment may even be declining. Why does this happen, and what should you do about it?

Why reporting doesn’t reflect reality and how it happens

The main reasons why AI-based attribution models can inflate your advertising data are the following:

  • Privacy changes and missing data.
    After privacy changes such as Apple's iOS 14.5 update (which lets users opt out of tracking in apps) and stricter cookie consent requirements in browsers (where data about user actions and behavior cannot be collected without explicit permission), advertising platforms found it increasingly difficult to track user actions.
    Their solution was statistical conversion modeling, in which AI "guesses" conversions based on incomplete data. For example, if a user declines cookie tracking, Google or Meta can still count a conversion based on indirect signals, even when no real tracked event exists.
  • AI systems now distribute credit for purchases or leads more "generously."
    For instance, if a person saw your ad, forgot about it, and a few days later returned through a Google search and purchased – the advertising platform may still claim the sale as its own.

The same happens with Meta (Facebook and Instagram): if a person merely viewed the ad within 24 hours before purchasing, the system can also count that purchase as its own.

The result: reports show more sales than actually happened, because the same real conversion is counted on multiple platforms simultaneously. The website recorded one purchase, but Google Ads and Meta Ads each report one "sale."

  • When the platform cannot see all sales or leads, it simply guesses them.
    If, due to cookie restrictions or privacy settings, Google or Meta cannot accurately see who purchased or submitted a lead, AI models fill in the blanks with predictions.

For example, if the system sees that a person viewed an ad but cannot track what happened afterward, it may decide that a purchase "most likely" occurred – because similar users often purchase in that situation.

Such a conversion appears in the dashboard as a real result, even though the platform never saw the purchase itself. That is why Google Ads may show more sales than GA4 or your ecommerce system – this is especially frequent in Performance Max campaigns.

Meta does the same: even if a user opts out of tracking, the system may still report that a "purchase happened." Sometimes this is visible directly – the report shows 2 "purchases" but revenue = €0, because the system could not link that "purchase" to an actual order.

  • Double-counting across all channels.
    The more AI and automation are used, the more marketing channels overlap. If a user saw your YouTube ad, then clicked a Google Search ad, and finally purchased by going directly to your website, Google's data-driven attribution (DDA) model will attribute partial credit to both the YouTube ad and the search ad.

At the same time, if that user also viewed a Meta ad during the journey, Meta's default attribution window (7-day click, 1-day view) may also assign credit for the viewed ad. Thus multiple platforms "share" the same buyer, each counting a conversion for itself.

Without unified attribution, when the numbers are added together, reports suggest more conversions than actually happened.
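
To make the double-counting concrete, here is a minimal sketch with purely illustrative numbers: it sums what each platform claims for the same period and compares the total against the orders the shop actually recorded.

```python
# Purely illustrative numbers: each platform attributes conversions using its
# own window, so the same order can be claimed more than once.
platform_reported = {
    "google_ads": 60,   # conversions Google Ads claims for the period
    "meta_ads": 50,     # conversions Meta Ads claims for the same period
}
actual_orders = 80      # orders recorded in your ecommerce backend

claimed_total = sum(platform_reported.values())      # 110
overcount = claimed_total - actual_orders            # 30 conversions nobody actually bought
inflation_pct = 100 * overcount / actual_orders      # 37.5%

print(f"Platforms claim {claimed_total} conversions for {actual_orders} real orders "
      f"({inflation_pct:.1f}% over-count).")
```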

Result: impressive yet unrealistic numbers in dashboards

During the era of keyword campaigns, we were used to a 1:1 relationship (clicked – purchased). Today, AI models act like a black box: they distribute conversion "credit" so widely that some of the reported conversions are merely algorithmic guesses or duplicates. If a business blindly trusts platform-reported metrics, it risks increasing budgets for non-existent sales or celebrating improved ROAS while nothing improves in financial reality. As analysts point out, when conversion data is misleading, you may scale budgets for the wrong campaigns, pause the ones that are actually profitable, or form an entirely inaccurate view of ROI – and this now happens more often than people think.

How to recognize whether your data is real

The first step is diagnosing whether you are dealing with misleading data. Below are indicators and diagnostic steps that will help you determine whether advertising dashboards correspond to reality in your case.

  1. Compare multiple sources.
    Check whether the conversion numbers shown in Google Ads match your Google Analytics 4 (or other analytics platform). Small discrepancies are normal, but if Google Ads shows far more conversions than GA4 for the same period or campaign – that is a red flag.

Recent observations show that in some cases Google Ads conversion numbers have increased "aggressively" relative to GA4, precisely because of broad attribution and modeling. For example, if GA4 attributes 50 purchases to the google/cpc channel but Google Ads reports 100 conversions, roughly 50 of them are "additional" AI-modeled conversions.

Likewise, compare Facebook Ads against your ecommerce system: do the reported purchases match real orders? If not, part of those purchases may be modeled – a small cross-check sketch follows below.
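
The sketch below (file and column names are assumptions – rename them to match your own exports) loads per-campaign conversion counts from Google Ads and GA4 and flags campaigns where the gap exceeds your tolerance:

```python
import pandas as pd

# Hypothetical CSV exports: one row per campaign with a conversion count.
ads = pd.read_csv("google_ads_conversions.csv")   # columns: campaign, ads_conversions
ga4 = pd.read_csv("ga4_conversions.csv")          # columns: campaign, ga4_conversions

merged = ads.merge(ga4, on="campaign", how="outer").fillna(0)

# Relative gap between what Google Ads claims and what GA4 observed.
merged["gap_pct"] = (
    100 * (merged["ads_conversions"] - merged["ga4_conversions"])
    / merged["ga4_conversions"].clip(lower=1)
)

THRESHOLD = 30  # percent; small discrepancies are normal, large positive ones are the red flag
suspicious = merged[merged["gap_pct"] > THRESHOLD]
print(suspicious.sort_values("gap_pct", ascending=False))
```

The same pattern works for Facebook Ads against your order database: join on date or campaign and look at the size and direction of the gap.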

  2. Monitor signals of modeled conversions.
    Although Google does not clearly separate observed from modeled conversions, you can estimate the scale.

One method is a regional test: in one country, temporarily disable Consent Mode (allow full tracking with all cookies, if legally permitted) and compare it with a country where Consent Mode remains active. If conversion volume remains similar without Consent Mode, modeling wasn't contributing much. If conversions suddenly drop – you can see how much was previously added through modeling.

Another method is using GA4 Modelled Data Export: in GA4 Admin you can see what share of conversions comes from observed versus modeled data (when Consent Mode is used). If, for example, 30% of conversions are "modeled," keep in mind that these are statistical assumptions. When making decisions (e.g., increasing budget), evaluate whether that extra 30% truly exists.

In Meta, monitor the observed-versus-modeled metrics (Meta Business Manager sometimes shows how many conversions were measured versus extrapolated).

If you use Meta Incremental Attribution, analyze it – Meta states that this model shows roughly 20% fewer conversions, but that it is closer to reality. This indicates that the standard Meta model tended to overestimate performance, while the new one gives a more realistic picture. These data points help reveal how much of your reporting is firm and how much is "elastic." A simple way to quantify the regional test is sketched below.
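
The sketch below uses made-up market names and figures to show the arithmetic: compare conversions per session in the market with full tracking against a comparable market where Consent Mode – and therefore modeling – stays active.

```python
# Hypothetical two-market comparison for the regional Consent Mode test.
market_full_tracking = {"sessions": 40_000, "conversions": 800}   # Consent Mode off: observed only
market_with_modeling = {"sessions": 38_000, "conversions": 950}   # Consent Mode on: observed + modeled

rate_observed = market_full_tracking["conversions"] / market_full_tracking["sessions"]
rate_modeled = market_with_modeling["conversions"] / market_with_modeling["sessions"]

# If the modeled market converts "better" on comparable traffic, the surplus is a
# rough estimate of how much modeling adds on top of observed conversions.
modeled_share = max(0.0, 1 - rate_observed / rate_modeled)
print(f"Estimated modeled share: {modeled_share:.0%}")   # ~20% with these numbers
```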

  3. Evaluate data logic against business results.
    Look at the full picture: sum all revenue claimed by the advertising platforms (Google, Meta, LinkedIn, etc.) and compare it with real revenue for the same period. In many cases the aggregated "revenue from ads" will be higher than your real sales, because each platform claims credit.

Instead of relying on platform ROAS, calculate the Media Efficiency Ratio (MER): total revenue divided by total advertising spend. This metric shows true ROI across all channels (it does not allow one sale to be counted twice). If platform-reported ROAS is rising but MER is flat or falling, the dashboards are "prettier" than reality. Analysts recommend comparing platform ROAS with an "adjusted ROAS" corrected using independent data, and monitoring whether the differences are dramatic. A short MER calculation is sketched below.
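
The sketch uses made-up figures; the point is that MER is computed from your own revenue and your own total spend, so no platform can double-count its way into it.

```python
# Hypothetical monthly figures; replace with your own finance and ads data (EUR).
total_revenue = 250_000                       # real revenue from your ecommerce / ERP system
ad_spend = {"google_ads": 40_000, "meta_ads": 25_000, "linkedin_ads": 10_000}

mer = total_revenue / sum(ad_spend.values())
print(f"MER: {mer:.2f}")                      # 250,000 / 75,000 = 3.33

# Blended ROAS as the platforms report it about themselves.
platform_claimed_revenue = 180_000 + 120_000  # Google + Meta "attributed" revenue
blended_roas = platform_claimed_revenue / sum(ad_spend.values())
print(f"Platform-reported blended ROAS: {blended_roas:.2f}")  # 4.00

# If blended platform ROAS keeps rising while MER stays flat or falls,
# the dashboards are prettier than your bank account.
```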

  4. Talk to the sales team (B2B).
    If your business is B2B and the goal of advertising is lead generation, talk to the people who handle leads. Are the "100 conversions" reported by Google or Meta actually visible in the CRM? Are there form submissions or micro-conversions that later turn out to be low quality?

If the sales team says there are only 20 real qualified leads rather than 100 – you have a clear signal that the platforms are over-reporting value. That is why, since 2024, Google has urged advertisers to import offline conversions (CRM data) back into Google Ads; if you do not, AI optimizes for form submissions, not for real revenue. A small reconciliation sketch follows below.
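
The sketch below (the CRM field names are assumptions) compares the leads the platforms claim against the leads the sales team actually qualified and recalculates the cost per lead that matters.

```python
# Hypothetical numbers for one month; swap in your own platform and CRM exports.
platform_reported_leads = 100        # "conversions" claimed by Google Ads + Meta Ads
ad_spend = 5_000                     # EUR spent in the same period

crm_leads = [                        # one dict per lead as it landed in the CRM
    {"email": "a@example.com", "status": "qualified"},
    {"email": "b@example.com", "status": "spam"},
    {"email": "c@example.com", "status": "qualified"},
    # ...
]

qualified = sum(1 for lead in crm_leads if lead["status"] == "qualified")

dashboard_cpa = ad_spend / platform_reported_leads   # looks great in the report
real_cpa = ad_spend / max(qualified, 1)              # what the sales team experiences

print(f"Dashboard CPA: €{dashboard_cpa:.2f} | CPA per qualified lead: €{real_cpa:.2f}")
```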

In short: independent verification of data is essential

Recent technical and attribution changes have significantly distorted Google Ads numbers, often creating the impression that campaigns are much more effective than GA4 or your internal analytics would show. Before celebrating improving metrics, it is crucial to do the homework and make sure those "conversions" are not an illusion.

Practical advice: how to reclaim real numbers

Once the problem is identified, the next step is to ensure that your advertising data again reflects reality. Below are specific recommendations for managing misleading AI attribution in both B2C and B2B contexts.

  • Do not rely solely on platform dashboards.
    The smarter advertising systems become, the more important it is to have an independent analytics source. Set up and configure Google Analytics 4 (or another analytics platform) and make sure it collects UTM tags from all campaigns. UTM tags are tracking parameters in URLs that let you see exactly where a customer came from and which ad actually drove the result; a small tagging sketch appears at the end of this point.

GA4 is not perfect, but at least it will not try to credit Google Ads when a conversion actually came from another channel. Also connect your accounts (Google Ads to GA4, Meta to GA4 via a data stream or the Conversions API) – this helps partially align attribution models and avoid basic discrepancies. And of course, compare regularly: GA4 vs Google Ads, CRM vs Google Ads, your ecommerce database vs Facebook Ads, and so on. Deviations should not be ignored. Continuous cross-checking ensures that decisions are based on more than one version of the results.
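
Here is that tagging sketch: a minimal helper that appends consistent UTM parameters to a landing-page URL. The parameter values are only examples – pick your own naming convention and apply it to every campaign.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Return the URL with utm_source / utm_medium / utm_campaign appended."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": source, "utm_medium": medium, "utm_campaign": campaign})
    return urlunparse(parts._replace(query=urlencode(query)))

# Example: a Meta ad pointing at a product page.
print(add_utm("https://example.com/product", "facebook", "paid_social", "spring_sale"))
# https://example.com/product?utm_source=facebook&utm_medium=paid_social&utm_campaign=spring_sale
```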

  • Drill into the ratio of observed versus modeled conversions.
    As described in the diagnostic steps above, Google does not easily reveal what is observed and what is modeled, but you can estimate the magnitude: run the regional Consent Mode test (disable it in one market, if legally possible, and compare with a market where it stays active) and check the observed-versus-modeled split via GA4 Modelled Data Export. If, say, 30% of conversions are "modeled," treat them as statistical estimates rather than confirmed results.

  • Use independent verification tools and methods.
    A single Google Analytics view is not enough. Consider third-party attribution platforms, or at least independent verification – for example Wicked Reports, Triple Whale, or Northbeam.

Also set up CRM integrations: use tools like Zapier or LeadsBridge to match every sale or lead to its source. This allows you to manually trace every result.

Another simple approach is sandbox testing and UTM audits. Make sure every ad link has correct UTM parameters. Then, when a conversion happens, even if Google or Meta claims it without a click, Google Analytics will still show the real source. Consistent UTM usage prevents "mysterious" conversions from landing in the wrong channel; see the audit sketch after this point.

Finally, monitor the Media Efficiency Ratio (MER) – as mentioned earlier, it is an independent indicator of overall marketing efficiency. If MER is strong, your business is growing even if the platforms exaggerate results. If MER is weak while platform ROAS looks excellent, you have a problem that dashboards will not reveal.
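
Here is the UTM audit sketch mentioned above: it scans a list of final ad URLs and flags any that are missing the required tags, so conversions cannot quietly land in the wrong channel.

```python
from urllib.parse import parse_qs, urlparse

REQUIRED = {"utm_source", "utm_medium", "utm_campaign"}

def audit_urls(urls):
    """Yield (url, missing_parameters) for every ad URL that is not fully tagged."""
    for url in urls:
        present = set(parse_qs(urlparse(url).query))
        missing = REQUIRED - present
        if missing:
            yield url, sorted(missing)

ad_urls = [
    "https://example.com/?utm_source=google&utm_medium=cpc&utm_campaign=brand",
    "https://example.com/landing",               # untagged – will be flagged
    "https://example.com/?utm_source=facebook",  # partially tagged
]

for url, missing in audit_urls(ad_urls):
    print(f"{url} is missing: {', '.join(missing)}")
```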

  • Install server-side tracking and fill data gaps.
    A technical but highly effective solution is server-side tracking. Instead of relying on fragile browser cookies, send conversion data directly from your server to Google and Meta.

For example, implement Google Ads conversion uploads (Enhanced Conversions, offline conversion imports) and the Meta Conversions API (via server-side Google Tag Manager or similar tools). This way you send confirmed conversions with user identifiers rather than leaving attribution to guesswork. It reduces dependence on modeled conversions and lets you pass richer information (hashed email, purchase value, margin), so Google can associate conversions with specific ads across devices.

In B2C, server-side tracking plus Enhanced Conversions helps "recover" lost conversions (especially on mobile) and attribute revenue more accurately.

In B2B, importing offline conversions – CRM data on real deals – is critical. This teaches Google and Meta to optimize for actual sales rather than just form submissions. Although it requires technical work, this has become the new standard in advanced marketing teams; a minimal server-side sketch follows below.
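
The sketch below is hedged: the pixel ID, access token, and API version are placeholders, and you should check Meta's current Conversions API documentation for the exact fields before relying on it. It sends one confirmed purchase from your own backend with a hashed email, so the platform does not have to model it; offline imports of CRM deals into Google Ads follow the same idea of uploading confirmed results rather than hoping a cookie survived.

```python
import hashlib
import time

import requests  # third-party HTTP client: pip install requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_CAPI_TOKEN"    # placeholder
API_VERSION = "v19.0"               # check Meta's docs for the current version

def hash_email(email: str) -> str:
    """Meta expects user identifiers normalized and SHA-256 hashed."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def send_purchase(email: str, value: float, currency: str = "EUR") -> None:
    """Report one confirmed purchase to the Meta Conversions API."""
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {"em": [hash_email(email)]},
            "custom_data": {"currency": currency, "value": value},
        }],
        "access_token": ACCESS_TOKEN,
    }
    url = f"https://graph.facebook.com/{API_VERSION}/{PIXEL_ID}/events"
    requests.post(url, json=payload, timeout=10).raise_for_status()

# Example: call this from your order-confirmation logic.
# send_purchase("customer@example.com", 49.90)
```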

  • Do not abandon human control and healthy skepticism.
    AI and automation do not mean you can relax. On the contrary – data quality will become a competitive advantage in 2026. Businesses that invest time and resources in verifying their data will outperform those that blindly follow algorithmic recommendations.

Adopt the principle: trust, but verify. Conduct periodic purchase/lead tests (create test conversions and check whether they appear correctly across platforms). If dashboards show more conversions than you manually created – you have a clear indicator of over-counting.

Also ask your customers. Post-purchase questionnaires (“How did you find us? What influenced your decision to buy?”) can provide valuable insight. If most customers mention organic search or recommendations, not ads – treating dashboards as absolute truth would be a mistake.

Finally, train specialists (or yourself) to become data coaches. Do not put everything on autopilot. Regular audits, verification, and AI supervision will give you an advantage. While competitors celebrate “inflated” metrics, you will rely on real data and make sharper decisions.

Why this matters

The digital advertising industry is entering a stage where lack of transparency in AI-generated performance metrics can cost businesses real money.

In 2026, winners will not be those who simply use the most AI tools, but those who know how to evaluate AI critically. We are already seeing dashboards that look excellent while real revenue does not grow.

This will affect everyone – both large B2C ecommerce brands and niche B2B service providers.

Therefore, prepare now: clean up your data, restore human supervision, and turn analytics quality into your competitive advantage. In the end, AI models must serve you – not the other way around. With accurate data and a clear picture, you will enter 2026 equipped to make the right marketing decisions.
