The State of Location Data in Ad Tech 2026: Privacy, Quality, and the Age of AI


Introduction

The advertising technology industry is at an inflection point. Three forces are converging in 2026 that will reshape how companies build audiences, measure campaigns, and extract value from location data: state-level privacy regulation is growing more aggressive; data quality has moved from a nice-to-have to a business-critical requirement; and artificial intelligence, despite enormous promise, is still searching for its footing in ad tech workflows.

For teams that build products around mobile identifiers, geolocation, and audience targeting, these aren't abstract trends. They're forcing real decisions about data sourcing, product architecture, and go-to-market strategy. This article examines each of these forces, explains what's actually changing, and offers a perspective on where location intelligence fits in the evolving ad tech landscape.

Section 1: Privacy Regulation Is Now Operational

Privacy regulation in ad tech isn't new. Companies have been adapting to evolving data privacy requirements for years. But the pace of change is accelerating. New states are introducing legislation with increasingly specific restrictions on the collection and sale of users' data, and frameworks vary across jurisdictions, so staying compliant requires constant attention. It also means companies need to be increasingly deliberate about who they partner with, and whether those partners can meet the same standard.

The New Regulatory Reality

Two state laws illustrate how quickly the ground has shifted. Oregon's amended Consumer Privacy Act (HB 2008), effective January 1, 2026, bans the outright sale of precise geolocation data, defined as data that can pinpoint a consumer or device within a 1,750-foot radius.¹ ² There is no opt-in exception. If the data meets the precision threshold and it's being sold, it's a violation, regardless of whether the consumer consented. Oregon's definition of "sale" is also notably broad, potentially reaching bundling arrangements, licensing deals, and derivative product creation.¹ The law also bans the sale of personal data from consumers under 16 for targeted advertising or profiling, with no consent override.
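The precision threshold lends itself to a simple ingestion-time gate. Below is a minimal sketch, assuming each signal carries a reported horizontal accuracy in meters; the `accuracy_m` field name and the gate itself are illustrative, not part of the statute:

```python
# Hypothetical compliance gate for Oregon's HB 2008. The 1,750-foot
# threshold is from the statute; everything else is an assumption.

FEET_PER_METER = 3.28084
OREGON_PRECISION_THRESHOLD_FT = 1750.0

def is_precise_geolocation(accuracy_meters: float) -> bool:
    """True if the signal can locate a device within a 1,750-foot radius."""
    return accuracy_meters * FEET_PER_METER <= OREGON_PRECISION_THRESHOLD_FT

def saleable_in_oregon(signal: dict) -> bool:
    # No opt-in exception: if the data meets the precision threshold
    # and is being sold, consumer consent does not cure the violation.
    return not is_precise_geolocation(signal["accuracy_m"])
```

A real pipeline would also have to account for Oregon's broad definition of "sale," which may reach bundling and licensing arrangements that a per-signal check like this cannot see.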

Maryland's Online Data Privacy Act (MODPA), effective October 1, 2025, goes further still.³ Where Oregon bans the sale, Maryland bans even the processing of sensitive data, including geolocation, unless it is "strictly necessary" to provide a product or service the consumer requested.⁴ Consent alone is not enough. Companies must demonstrate a documented business necessity. At least one location analytics provider has publicly disclosed that MODPA forced it to stop collecting and processing mobility data within Maryland entirely, resulting in a measurable reduction in trip data.⁵

These aren't isolated experiments. Industry analysts expect additional states to follow within 12 to 24 months, with active proposals in Massachusetts, Illinois, and New York.⁹ The Nebraska Attorney General's recent public remarks to ad tech industry leaders offered a notable counterpoint: a middle-of-the-road stance emphasizing consumer education alongside the practical importance of location data. But the direction of travel is clear. And internationally, the EU's ePrivacy framework and the UK's post-Brexit data regime continue to evolve, making consent management a global operational challenge, not just a domestic one.

What This Means for Advertising Platforms

The operational challenge for platforms is that these laws don't just restrict what you can do with data. They change what you need to know about every signal entering your system. Where was it collected? Under what consent framework? Does that framework satisfy the strictest applicable law, not just the jurisdiction where it was collected, but every jurisdiction where it will be activated? What sensitive locations have been filtered, and was that filtering done before ingestion or after? These are no longer legal-team-only questions. They're product architecture decisions, and they require partners who can answer them with specificity.
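One way to make those questions answerable at the architecture level is to attach provenance metadata to every signal at ingestion and test it against the strictest applicable standard. The sketch below is hypothetical: the consent tiers, jurisdiction codes, and field names are assumptions for illustration, not any statute's taxonomy:

```python
from dataclasses import dataclass, field

# Illustrative ranking: a higher rank means a stricter consent standard.
CONSENT_RANK = {"none": 0, "opt_out": 1, "opt_in": 2}

@dataclass
class SignalProvenance:
    collected_in: str                  # jurisdiction of collection, e.g. "TX"
    consent_framework: str             # "none", "opt_out", or "opt_in"
    sensitive_filtered_at_ingestion: bool
    activation_targets: list = field(default_factory=list)

def meets_strictest_standard(p: SignalProvenance, required: dict) -> bool:
    """Consent must satisfy every jurisdiction the signal touches:
    where it was collected and everywhere it will be activated."""
    jurisdictions = [p.collected_in, *p.activation_targets]
    needed = max(CONSENT_RANK[required.get(j, "opt_out")]
                 for j in jurisdictions)
    return CONSENT_RANK[p.consent_framework] >= needed
```

The design point is that the check runs against the union of jurisdictions, so a signal collected under one state's rules cannot quietly be activated under another's.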

Privacy-Aware vs. Privacy-Forward

The gap between privacy-aware and privacy-forward matters here. A privacy-aware company responds to regulation as it arrives: updating policies, adding disclosures, pulling data from states that ban it. A privacy-forward company builds its infrastructure around the assumption that regulation will tighten, that consent standards will rise, and that the safest position is to exceed what's required today rather than scramble to meet what's required tomorrow.

That means sourcing data exclusively from devices where consumers have opted in through clear, affirmative consent. It means applying privacy-enhancing technology at the point of ingestion: identifying and removing signals generated at sensitive locations so that data never enters a client's environment with privacy risk attached. And it means maintaining these standards globally, not just where laws currently mandate them. A partner that applies sensitive-location filtering only in Maryland, or consent-based sourcing only in the EU, is betting that other jurisdictions won't catch up. Every quarter, that bet gets harder to justify.
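Ingestion-time sensitive-location filtering can be sketched as a distance check against a curated list of sensitive points of interest. This is a toy illustration assuming a fixed exclusion radius; a production system would use polygons, category taxonomies, and jurisdiction-specific rules rather than a single number:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def drop_sensitive(signals, sensitive_pois, radius_m=100.0):
    """Remove any signal generated within radius_m of a sensitive
    location, so it never enters a client's environment."""
    return [
        s for s in signals
        if all(haversine_m(s["lat"], s["lon"], plat, plon) > radius_m
               for plat, plon in sensitive_pois)
    ]
```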

Section 2: CTV, In-Housing, and the Identity Challenge

While privacy regulation is reshaping data supply, the demand side is changing just as fast. Two trends stand out: the growth of connected TV and the move toward in-house identity infrastructure. Both are raising the bar for what platforms need from their location data partners.

Connected TV Changes the Identity Equation

U.S. CTV ad spending is projected to reach approximately $38 billion in 2026, up from $33 billion in 2025, with streaming capturing a record 47.5% of all U.S. TV viewing as of December 2025. Over 90% of CTV ad spend is now transacted programmatically. For advertisers, CTV offers what linear TV never could: household-level targeting, cross-device measurement, and the ability to tie ad exposure to real-world behavior.

But CTV also introduces a new identity challenge. A CTV device ID isn't a mobile advertising ID. It lives in a different ecosystem, with different signals and different measurement constraints. Linking CTV impressions to real-world outcomes requires connecting CTV IDs to mobile identifiers, hashed emails, and behavioral data, and those connections only hold up if the underlying location signals are verified. A CTV-to-MAID linkage built on IP-derived coordinates or synthetic device IDs doesn't give a platform a cross-channel view. It gives it cross-channel fiction.

The In-Housing Shift

Brands and platforms are increasingly building their own identity graphs rather than relying on pre-packaged audience segments from third parties. The motivation is control: control over data freshness, audience composition, and the ability to activate across channels on their own timeline. Retail media networks, DSPs, and large advertisers are all investing in infrastructure to link MAIDs with hashed email addresses (HEMs), frequently leveraged IP addresses (FLIPs), CTV IDs, and universal identifiers like UID2. These linkages form identity graphs that persist even as individual identifiers degrade or turn over.

This is where location data and identity data reinforce each other. Location signals verify that an identity cluster represents real human behavior. A MAID linked to a HEM that shows consistent commute patterns, regular gym visits, and weekly grocery trips is far more likely to represent an actual person than an orphaned device ID with no behavioral footprint. And identity linkages give location data cross-channel reach: a visitation pattern tied to a HEM can be activated in email, CTV, and programmatic display, not just mobile.

Data Linkages as a Verification Layer

An increasingly important function of data linkages is verification. A synthetic MAID generated by duplicating device IDs and changing a few characters is unlikely to have a legitimate hashed email associated with it, and it certainly won't have persona data like "yoga enthusiast" or "frequent traveler" attached. The presence of linkages serves as evidence that a signal is more likely to represent real human behavior.
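The verification idea can be made concrete with a toy scoring heuristic: the more verified linkages an identifier carries, the more likely it represents a real person. The keys and weights below are illustrative assumptions, not any vendor's actual model:

```python
def linkage_trust_score(device: dict) -> float:
    """Toy heuristic: sum the weights of whichever verified linkages
    are present on the identifier. A synthetic MAID with no legitimate
    HEM, CTV ID, or persona data scores near zero."""
    weights = {"hem": 0.4, "ctv_id": 0.2, "uid2": 0.2, "personas": 0.2}
    return sum(w for key, w in weights.items() if device.get(key))
```

An orphaned device ID scores 0.0, while one linked to a hashed email and persona data scores meaningfully higher, which is the signal a platform can use when deciding how much to trust an identifier.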

The result is a layered approach to data quality: analytics at the signal level identify anomalies and behavioral patterns, while data linkages at the device level verify that the identifier represents a real person with real characteristics. Together, they move the industry from a world where "clean data" is a marketing claim to one where data quality can be measured, tested, and demonstrated.

The platforms building their own identity infrastructure — and connecting it across mobile, CTV, and email — need location data that arrives with both privacy compliance and the analytical depth to make these linkages meaningful.

Section 3: Data Quality Is No Longer Optional

For most of ad tech's recent history, data quality was an afterthought. Impressions were cheap, audience segments were abundant, and the cost of delivering ads to synthetic devices wasn't worth the effort to weed them out. If your models produced a positive-enough return on ad spend, nobody asked too many questions about what was in the underlying data. That era is ending. As campaigns have become more expensive and as ROAS has become the metric that matters most, the tolerance for garbage-in-garbage-out has evaporated. Teams are discovering that models built on unverified data don't just produce mediocre results — they produce unreliable results that erode trust in the entire measurement framework.

The Problem with Single-Source Data

The default in much of ad tech is still to source mobile location data from a single broker or reseller. The transaction is straightforward: you receive a file of device IDs, coordinates, and timestamps. What you don't receive is any context about what those signals actually represent. Is that device ID attached to a human being, or is it a fabricated identifier with a few characters changed? The data arrives, and the platform is left to figure out what's real and what's noise on its own.

This is the norm, and it's how a lot of ad tech still operates. Most brokers and resellers who sit between the original data source and the end platform haven't built the infrastructure to provide forensic indicators, quality signals, or behavioral metadata alongside the data itself. The result is a commodity product: coordinates without context.

What a Forensic Intelligence Layer Actually Looks Like

Unacast approaches this differently. Rather than delivering raw signals and leaving quality assessment to the buyer, Unacast applies analytical layers at the point of ingestion that help platforms understand what they're actually looking at before the data ever enters their environment. It's the difference between a vendor that moves volume and a trusted partner that thinks deeply about both the data and its privacy implications.

These forensic capabilities are built-in analytics that give platforms more context and insight on every signal in the dataset. Some help identify signals that are likely synthetic, meaning coordinates generated by the commercial ecosystem to inflate metrics rather than reflecting actual device movement. Others provide behavioral context: whether a device is likely driving, whether a signal is IP-derived rather than GPS-sourced, whether a movement pattern is consistent with real-world human behavior or suggests something anomalous. Unacast includes 24 of these analytics with its location data, each one adding a layer of intelligence that platforms can use to make more informed decisions about what to do with every signal they receive.

The value isn't just in filtering. It's in precision. A platform building a retargeting audience around store visitation can exclude signals flagged as likely driving or IP-derived, keeping only the visits that reflect someone actually walking through a door. A platform building an identity graph can use these analytics to weigh certain signals more heavily than others, producing clusters that more accurately represent real people with real behavioral patterns. The more context a platform has on each signal, the more intelligent its audiences, measurement, and identity resolution become.
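The retargeting example above reduces to a simple filter over per-signal analytics flags. A minimal sketch, assuming boolean flags delivered alongside each signal (the flag names are illustrative):

```python
def store_visit_audience(signals):
    """Keep only device IDs whose signals are plausible in-person
    visits: exclude anything flagged as likely driving, IP-derived,
    or likely synthetic."""
    return sorted({
        s["device_id"] for s in signals
        if not (s.get("likely_driving")
                or s.get("ip_derived")
                or s.get("likely_synthetic"))
    })
```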

Why This Matters Now

The shift toward data quality urgency was driven by the accumulation of poor outcomes: campaigns that didn't convert, attribution models that told a story the brand couldn't verify, and audience segments that looked impressive on paper but didn't drive incremental results. When synthetic data inflates an audience, every downstream metric is compromised. The campaign "reaches" devices that don't represent real consumers. The attribution model "proves" lift that never occurred. The brand pays for performance that was never delivered.

Brands are increasingly interested in incrementality, which means understanding not just who their loyal customers are, but finding the people they didn't already know about. That requires data that represents actual human behavior, not synthetic noise. And it requires the ability to distinguish between a device that genuinely visited a competitor's location and one that merely appears to have done so because of a spoofed signal or an IP-derived coordinate.

Section 4: AI in Ad Tech: Promise, Reality, and What Comes Next

If you've attended an ad tech conference, visited a vendor's booth, or read any industry publication in the last 18 months, you've been told that AI is transforming advertising. And at a high level, that's not wrong. But what's actually working? Where is the industry still guessing? And what does any of it have to do with location data?

The Gap Between Promise and Reality

The IAB's January 2026 research, conducted in partnership with Sonata Insights, paints a more complicated picture than the headlines suggest. The study found that 82 percent of ad executives believe younger consumers feel positive about AI-generated ads, but only 45 percent of those consumers actually feel that way, a perception gap that has widened from 32 points in 2024 to 37 points in 2026.⁶ And despite more than 70 percent of marketers reporting they've encountered AI-related issues like hallucinations, bias, or off-brand content, fewer than 35 percent plan to increase their investment in AI governance or brand integrity oversight.⁸

IAB Tech Lab CEO Anthony Katsur captured the industry's position well when he predicted that ad tech should expect "several false starts" in deploying agentic AI solutions, noting that practical application will require years of experimentation, standardization, and alignment across platforms, agencies, and publishers.⁷

The tension is palpable at agencies. Leaders report feeling simultaneous pressure from clients to be visibly doing something with AI while also operating under greater scrutiny than ever before. The result is a lot of motion without necessarily a lot of progress: pilot programs that don't scale, tools that automate the wrong workflows, and AI-generated creative that damages brand equity rather than enhancing it.¹⁰

The Problem Is What You're Feeding AI

Much of the industry's disappointment with AI in ad tech comes down to a familiar problem: garbage in, garbage out. AI doesn't evaluate whether the data it's working with is trustworthy. It finds patterns. If the dataset is full of synthetic signals, IP-derived coordinates passed off as GPS, and device IDs that don't represent real devices, AI will find patterns in that noise and build audiences with absolute confidence. The model won't know the difference between a real consumer who visited a competitor's store and a spoofed device ID that happened to generate a coordinate near that location. It doesn't have judgment. It has pattern recognition.

This is where the signal-level analytics discussed previously connect directly to AI performance. A platform feeding raw, unanalyzed location data into an AI-powered audience builder is asking the model to sort signal from noise on its own, a task it was never designed to do. But a platform feeding data that already carries 24 layers of contextual analytics, where every signal has been assessed for whether it's synthetic, IP-derived, likely driving, or behaviorally implausible, is giving the model a fundamentally better starting point. The AI isn't doing less work. It's doing the right work, because the data underneath it has already been interrogated.
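One simple way a platform could hand that context to a model is to convert analytics flags into per-signal training weights, down-weighting low-trust signals rather than silently mixing them in. The flags and multipliers below are illustrative assumptions, not a published methodology:

```python
def training_weight(signal: dict) -> float:
    """Map quality flags to a weight a downstream model can consume.
    Lower-trust signals are down-weighted; fabricated ones excluded."""
    if signal.get("likely_synthetic"):
        return 0.0          # exclude fabricated identifiers entirely
    w = 1.0
    if signal.get("ip_derived"):
        w *= 0.3            # coarse coordinate, not GPS-sourced
    if signal.get("likely_driving"):
        w *= 0.5            # unlikely to reflect a dwell or visit
    return w
```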

The same applies to identity resolution. AI models that build and maintain identity graphs perform better when the inputs carry quality indicators. A MAID linked to a HEM with verified behavioral patterns gives the model something solid to cluster around. An orphaned device ID with no linkages and an IP-derived coordinate gives it noise to memorize. The difference in output quality between these two scenarios compounds across billions of signals.

Where AI Is Actually Changing Workflows

Despite the broader disappointment, there are real applications emerging. One major e-commerce marketplace has built an internal tool that allows brands to construct advertising campaigns using natural language: essentially a chat interface where a marketer can type something like "build me a campaign targeting new homeowners in suburban Atlanta" and receive a deployable audience with associated creative recommendations. The system uses the Model Context Protocol (MCP) to translate natural-language queries into audience builds and campaign parameters.

This kind of interface represents a genuine shift in how campaigns get built. Instead of a media buyer spending weeks assembling a targeting plan, navigating platform-specific tools, and waiting for results, the process collapses to minutes. The AI interprets what's in the data lake (device locations, behavioral patterns, demographic affinities) and assembles a tailored audience on demand.

But it reinforces the same dependency. The faster and more automated the audience-building process becomes, the more important it is that the underlying data is already clean, contextualized, and verified before it enters the pipeline. Speed without quality just means you're making bad decisions faster.

The Future: Location Intelligence as a Natural Language Interface

The most compelling vision for AI and location data is making location intelligence accessible to people who aren't data scientists. Today, extracting insight from location data requires technical expertise: writing queries, understanding privacy constraints, navigating complex datasets. The end result (a given audience, for example) is conceptually simple, but the process of getting there is increasingly complex.

Where AI may actually move the needle in ad tech is in collapsing that process: enabling product leaders, brand marketers, and business analysts to query location data in natural language and receive actionable answers without requiring a data engineering pipeline. None of that happens quickly. It will take real investment in infrastructure, data compatibility, and privacy-safe delivery mechanisms. But it points toward a shift from location data as a specialized technical input to location intelligence as something broadly accessible.

None of that works if the underlying data isn't trustworthy. You can't build a natural-language interface on top of unverified signals and expect reliable answers. The forensic analytics, the privacy compliance, the identity linkages: these aren't separate from the AI story. They're the infrastructure that makes AI in ad tech actually deliver on what it's been promising.

Looking Ahead

The ad tech industry has spent the last several years managing disruption: signal loss from browser changes, evolving device identifiers, and an ever-growing patchwork of state privacy laws. In 2026 and beyond, the companies that thrive won't be the ones that found clever workarounds for each individual challenge. They'll be the ones that recognized these forces as interconnected and built their data strategy accordingly.

Privacy regulation is tightening the supply of precise location data. That makes the data you have more valuable, and demands more rigor in how it's sourced, processed, and delivered. CTV and in-housing are creating new identity challenges that only work when the underlying signals are verified and linkable across channels. Data quality won't be a differentiator; it'll be a baseline expectation. Forensic intelligence, data linkages, and verified behavioral signals are the mechanisms that separate trustworthy data from commodity coordinates. And AI, for all its promise, is an amplifier. It magnifies the quality of what you feed it, for better or worse.

Location intelligence sits at the intersection of all four. The companies that invest in curated, privacy-forward, analytically enriched location data are the ones that will build better audiences, measure more reliably, and actually trust what their AI tells them.

Continue the Conversation

Want to talk to the team about your advertising use cases or dig deeper into any of these subjects? Start a conversation here.
