Could You Soon Use DMA Search Data to Outrank Competitors in the EU?

  • Writer: All things tech
  • 3 hours ago
  • 7 min read


You know that feeling when you’re pretty sure your competitor didn’t “magically” get better content overnight… but they still jumped ahead of you on Google? Now the EU is floating a move that could make that mystery a little less mysterious. The idea: under the DMA, Google could be pushed to share anonymized search data (rankings, queries, clicks, views) with eligible rival search engines and even qualifying AI chatbots in the EU/EEA, on FRAND terms.

It’s not law yet, and it’s not a secret pass into Google’s index, but if it happens, it could change what “good search signals” look like for SEO teams and AI search products.


What the EU is actually considering (and what it’s not)


So about that “my competitor didn’t suddenly become a genius but somehow they’re outranking me” feeling. The EU’s current idea wouldn’t magically explain every ranking jump… but it could make parts of search performance less of a black box for anyone building a competing search product.


Here’s what the European Commission is actually considering under the Digital Markets Act (DMA): Google could be required to share specific categories of anonymized Google Search data with eligible rival search engines across the EU/EEA, and the access would be on FRAND terms (fair, reasonable, and non-discriminatory).


And when people say “search data,” they don’t mean a vague dashboard. The proposal calls out four buckets pretty clearly: ranking data, query data, click data, and view data.


Why the EU cares (in plain English)


The Commission’s stated goal is competition: if rivals can see aggregated, privacy-safe signals about what shows up, what gets clicked, and what gets seen, they can tune their own search quality and better “contest Google Search’s position.”


It’s also why the Commission is spelling out the “how,” not just the “you must.” The proposed measures cover things like:


  1. Who qualifies to receive the data (including the spicy question of whether AI chatbots can qualify as “online search engines”)

  2. How much data is in scope

  3. Methods + intervals for sharing it

  4. Anonymization standards

  5. FRAND pricing guidance

  6. Access procedures


The biggest misconception: “Is Google handing over its index?”


No. This is the part that’ll save you from a lot of bad LinkedIn takes.


The proposal doesn’t give anyone access to Google’s full index and it’s not a backstage pass to the algorithm. It’s closer to getting anonymized SERP performance signals—the kind of inputs a competing engine (or a qualifying AI search product) could use to evaluate and improve ranking systems—without getting Google’s entire corpus of crawled pages.


Also worth noting: this isn’t final yet. The measures are described as preliminary, with a public consultation open until May 1 and a final decision due by July 27.


What’s in scope: who qualifies, what data you’d get, and how often


If this moves forward, the big question isn’t “can I get the data?” It’s who counts as “eligible” and what Google would actually have to hand over once you pass that bar.


Who qualifies (and why “AI chatbot” isn’t automatically a yes)


The Commission’s draft measures talk about “data beneficiaries”—basically, eligible third parties operating search engines in the EEA.


Here’s the twist: AI chatbots can be in the running, but only if they qualify as an “online search engine” under the DMA definition.


That wording matters a lot. It hints at a future where some AI products get treated like search engines (because they retrieve and rank info for users), while others get told, “Nice try, you’re a chatbot, not a search service.”


What the access process could look like (real-life, not theory)


The proposal doesn’t just say “share data.” It also calls out procedures for how beneficiaries access the data, plus eligibility criteria, anonymization standards, and guidelines for determining FRAND pricing.


Translated into what you’d expect in practice, it likely looks something like:


  1. Application: you ask for access as a rival search engine (or as an AI chatbot provider arguing you qualify)

  2. Verification: Google and/or the Commission checks you meet the eligibility criteria

  3. Terms: you agree to the rules (privacy, permitted use, security) and the FRAND commercial setup

  4. Provisioning: you get access in the method/format the final rules specify


The important part: the “access steps” are part of the measures, not an afterthought.


What data you’d get (in human terms)


The four categories in scope are:


  • Query data: what people searched (in an anonymized form)

  • Ranking data: where results appeared for those searches

  • Click data: what results got clicked

  • View data: what results appeared as impressions (seen even without a click)


If you’re thinking “so… like a supercharged Search Console?” — kind of, but aimed at competitor search engines and qualifying AI search products, not site owners.
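To make those four buckets concrete, here's a purely hypothetical sketch of what a single shared record could look like. The real format hasn't been defined yet, so every field name here is an assumption, not the proposal's schema:

```python
from dataclasses import dataclass

# Hypothetical shape of one anonymized search record; the actual
# delivery format is still undecided -- field names are illustrative only.
@dataclass
class SearchSignal:
    query: str       # anonymized/generalized query text (query data)
    result_url: str  # the result shown
    rank: int        # position on the results page (ranking data)
    views: int       # aggregated impressions (view data)
    clicks: int      # aggregated clicks (click data)

    @property
    def ctr(self) -> float:
        """Click-through rate derived from clicks and views."""
        return self.clicks / self.views if self.views else 0.0

record = SearchSignal("best running shoes", "example.com/shoes", 3, 1200, 84)
print(round(record.ctr, 3))  # 0.07
```

The point of the sketch: even without Google's index, combining those four fields per query is enough to compute the engagement metrics (like CTR by rank) that a rival engine would tune against.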


How often you’d get it (cadence) + the “anonymized” guardrails


The draft explicitly includes methods and intervals for sharing data and anonymization standards for personal data, with the technical details still to be finalized.


So cadence could land anywhere on a spectrum:


  • Near-real-time feeds (great for trend detection, expensive to run, heavy privacy scrutiny)

  • Batch deliveries (daily/weekly/monthly style dumps that are easier to govern)


Either way, the word doing a lot of work here is anonymized. The more privacy constraints tighten, the more you can expect trade-offs like less granular queries, fewer breakdowns, or thresholds that blur the “juicy” edge cases.
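To see why thresholds blur the "juicy" edge cases, here's a minimal sketch of the kind of suppression pass an anonymization standard might require. The threshold value and the queries are made up; the mechanism (drop anything below a minimum volume) is the standard privacy move:

```python
# Hypothetical anonymization pass: suppress any query whose total view
# count falls below a minimum threshold, so rare -- and potentially
# re-identifying -- queries never leave the dataset.
K_THRESHOLD = 10  # illustrative value; the real standard is undecided

raw = {
    "weather berlin": 5400,
    "python dataclass tutorial": 320,
    "jane doe 42 elm street": 3,   # rare query = re-identification risk
}

released = {q: views for q, views in raw.items() if views >= K_THRESHOLD}
print(sorted(released))  # ['python dataclass tutorial', 'weather berlin']
```

Notice what falls out: exactly the rare, long-tail queries. The tighter the threshold, the more the shared data skews toward head terms everyone already knows about.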


FRAND, pricing, and the part nobody puts on the conference slides


All of this sounds exciting until someone asks the question that makes every “future of search” panel suddenly cough and look at their shoes:


Who pays, how much, and what strings are attached?


The Commission’s draft measures don’t just say “share the data.” They also call out guidelines for determining FRAND pricing as a formal part of the package. That little line is where the whole thing can either become a real competitive shift… or a very expensive PDF announcement.


FRAND, minus the legal throat-clearing


FRAND means access has to be:


  • Fair: no “special deals” for your favorite rival

  • Reasonable: prices and conditions can’t be set to quietly scare everyone away

  • Non-discriminatory: similar players should get similar terms


Notice what FRAND doesn’t mean: “free” or “easy.”


What might get priced (even if nobody wants to say it out loud)


Because the measures also include the methods and intervals of data sharing and anonymization standards, there are real operational costs involved. In practice, pricing could end up tied to things like:


  • Volume (how many queries/records you request)

  • Freshness (faster delivery usually costs more to run and police)

  • Access method (API-style feed vs. periodic files)

  • Support + compliance overhead (security reviews, onboarding, monitoring)


If pricing lands too high, only well-funded rivals benefit. If it lands low, Google’s going to argue the privacy/security burden is being ignored—which is already part of their public pushback.


The trade-offs that shape how useful the data really is


The same proposal that promises access also bakes in guardrails: anonymization standards, access procedures, and the ability to set the “extent” of what’s shared. That’s where you can expect conference-slide-unfriendly realities like:


  • Minimum thresholds (small-volume queries might be withheld to reduce re-identification risk)

  • Aggregation (data grouped by time/region/category instead of raw detail)

  • Rate limits (to stop bulk extraction or weird “query probing”)

  • Usage rules (limits on onward sharing, retention periods, approved purposes)

  • Auditability (logs, compliance checks, “show us how you used it” clauses)
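"Aggregation" in that list deserves a concrete picture: instead of raw rows, you'd likely receive grouped rollups. A hypothetical sketch (the grouping keys and event format are assumptions, not the proposal's):

```python
from collections import defaultdict

# Hypothetical raw click events (region, ISO week, clicks); aggregation
# replaces them with per-region, per-week totals before release.
events = [
    ("DE", "2025-W14", 1), ("DE", "2025-W14", 1),
    ("FR", "2025-W14", 1), ("DE", "2025-W15", 1),
]

rollup = defaultdict(int)
for region, week, clicks in events:
    rollup[(region, week)] += clicks

print(dict(rollup))
# {('DE', '2025-W14'): 2, ('FR', '2025-W14'): 1, ('DE', '2025-W15'): 1}
```

Every level of grouping you add (time, region, category) is privacy won and granularity lost; that trade is where most of the practical value will be negotiated.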


Net: FRAND access can still come with a lot of “yes, but…”—and those “buts” will decide whether DMA Google data sharing turns into an actual edge for rival search engines and eligible AI search products, or just a pricey, heavily sanitized feed that’s good for broad trends and not much else.


Why marketers and AI search teams should care (and what to watch before July 27)


If you’re on the marketing side, you might be thinking: “Cool story… but I’m not an ‘eligible rival search engine.’ How does this touch my Tuesday?”


It matters because if rival engines (and possibly AI chatbots that qualify as online search engines) get access to these anonymized search signals, the way they rank, summarize, and cite sources in the EU/EEA can shift fast.


If you run SEO/brand: what could change in your day-to-day


You’re used to doing SEO with partial information: Search Console here, rank trackers there, and a lot of “maybe Google liked it” guesswork.


A data-sharing regime like this could make a few things less fuzzy for the broader ecosystem:


  • Competitive intel gets less vibe-based: Rival search engines could use shared ranking/query/click/view signals to tune their own SERPs, which changes what “winning” looks like across EU search surfaces.

  • Better query-to-topic mapping (especially long-tail): Even in anonymized form, query + engagement patterns can reveal which topics actually get attention versus which ones just feel important in keyword tools.

  • Cleaner reads on SERP volatility: If alternative engines improve faster, you may see more noticeable differences between “Google-only SEO” and “EU search visibility” across multiple engines.

  • Clearer paths to AI citation visibility: The Commission explicitly contemplates access for AI chatbots that qualify as online search engines. If those products improve retrieval/ranking, getting cited may start to depend more on being retrievable and quotable, not just “#3 on Google.”


If you build AI search: why this is a big deal (without pretending it’s magic)


If you’re building an AI search product, you don’t need Google’s index to benefit from this. What you need is ground-truth-ish feedback: what tends to rank, what gets clicked, what gets viewed.


That’s exactly the kind of loop that helps with:


  • Ranking evaluation (are we producing results people would likely choose?)

  • Retrieval tuning (are we fetching the right set of candidates before we rank?)

  • Offline testing (comparing model changes against consistent engagement signals)


The proposed measures even frame the purpose pretty bluntly: helping third parties “optimise their search services and contest Google Search’s position.”


What to watch before July 27 (practical checklist)


Nothing here is final yet. The timing is the whole game.


Keep an eye on:


  1. Consultation deadline: May 1 — this is when the public feedback window closes.

  2. Final decision due: July 27 — this is the “are we really doing this?” date.

  3. Whether AI chatbots stay eligible — the proposal includes them if they meet the DMA’s definition of online search engines, and that detail could tighten or loosen.

  4. How strict “anonymized” ends up being — the proposal includes anonymization standards, and heavy privacy constraints can quietly turn “useful” into “mostly generic.”

  5. Google’s privacy/security objections — Google is already arguing the proposal would force handover of sensitive searches with “dangerously ineffective privacy protections.” That pushback can influence what the final rules look like.


