Breaking down common misconceptions in proximity & location marketing

July 19, 2017
Romet Kallas
Account Executive

The birth of customer-centric marketing traces back to the 1960s, when marketers shifted their focus from reaching the largest number of potential customers to reaching individual people in the most relevant and efficient way. While more than 50 years have passed, the marrying of scale and relevancy is still the golden formula that drives much of the innovation at many of the world’s top brands. Leading this innovation are the companies that process 500+ terabytes of online data a day and have consumer-facing products that you’re fighting the urge not to use while reading this article. The rise of automation and walled gardens has also moved the marketing industry significantly closer to the golden formula. For those who don’t have the luxury of owning large volumes of first-party data, another valuable toolset has emerged: proximity and location data.

2017 has already been a spectacular year for proximity and location data vendors. Marketers have started to realize the importance of understanding customers’ visitation patterns and how they can leverage them to replace the need for sign-in data. That is why many agencies and demand-side platforms that have built, or started to build, their own Data Management Platforms prioritize location data in their data acquisition strategies. Even though the market has come a long way in understanding real-world behavior, there are still many misconceptions and questions around the concept. We want to help break down some of the most common ones for you.

1. Proximity and location data are the same thing

This is number one for obvious reasons: people use the two terms pretty interchangeably, even though they are quite nuanced. Think of general location data as a macro-level view of a customer and proximity as a micro-level one. With location data you are usually able to tell that a device has been in a certain area, whereas proximity is a more granular form of location data that enables you to pinpoint a device to a specific venue. To increase accuracy, technologies such as beacons, Wi-Fi, and NFC, and data points such as dwell time, check-ins, and point-of-sale records, may be used.

The breakdown:

As a marketer, don’t get too caught up in the semantics; all you should care about is accuracy. Focus on these questions: Is the provider you are looking to acquire data from able to give you granular data with context? And are they able to provide you transparency into the confidence level of each signal?

2. Proximity and location data is all sourced the same way

Most of the location data available on public exchanges is sourced from bid streams: when an advertiser is willing to pay more for an ad request, the publisher can send back the GPS location of the device at the time the ad was served. However, there are many other channels that data providers can use to source data.

The breakdown:

Firstly, the efficacy of each data source truly depends on your company’s use cases and goals. That said, an SDK (Software Development Kit) is a piece of code built into an app (or apps) that can continuously collect data points from beacons, Wi-Fi, and NFC, as well as background or foreground location signals.

3. All GPS data is the same

While there is a lot of scale in collecting GPS data via bid streams, there are some concerning accuracy problems with this method. The data doesn’t come with enough detail (there are too few interactions), or the lat/lon passed is cached and the coordinates are no longer relevant. In addition, data requests can be delayed by a bad signal, forwarding the wrong coordinates. All of this culminates in data that is mislabeled or simply fraudulent.

The breakdown:

Unfiltered location data from the bid stream is now considered unreliable and, for a non-location company, very difficult to validate. GPS data sourced from an SDK with a sufficient number of interactions is more reliable and lends itself much more easily to validation. There is no inherently bad data, but where it is sourced and whether it has been filtered can make a big difference depending on a company’s objectives.
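To make the filtering idea concrete, here is a minimal Python sketch of the kind of sanity checks a buyer might apply to bid-stream lat/lon signals. The field names and thresholds are illustrative assumptions, not an industry standard; real validation pipelines are considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class LocationSignal:
    lat: float
    lon: float
    timestamp: float      # Unix seconds when the signal was recorded
    received_at: float    # Unix seconds when we ingested it

def decimal_places(value: float) -> int:
    """Count decimal places as serialized; truncated precision often
    indicates a centroid or cached coordinate, not a live GPS fix."""
    text = f"{value}"
    return len(text.split(".")[1]) if "." in text else 0

def is_plausible(sig: LocationSignal, max_age_s: float = 600.0) -> bool:
    # Reject coordinates outside the valid lat/lon range.
    if not (-90.0 <= sig.lat <= 90.0 and -180.0 <= sig.lon <= 180.0):
        return False
    # Reject (0, 0) "null island" placeholder coordinates.
    if sig.lat == 0.0 and sig.lon == 0.0:
        return False
    # Reject low-precision coordinates (e.g. zip-code centroids).
    if decimal_places(sig.lat) < 3 or decimal_places(sig.lon) < 3:
        return False
    # Reject stale signals: a cached lat/lon no longer reflects the device.
    if sig.received_at - sig.timestamp > max_age_s:
        return False
    return True
```

Even these simple checks would drop a meaningful share of raw bid-stream signals; the point is that scale without filtering is not the same as usable data.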

4. Proximity and location tech violates privacy

This is a sensitive topic, and rightfully so: individuals have a right to their privacy, and it is our job as an industry to ensure it. While there have been great strides in protecting PII (Personally Identifiable Information), some players have toyed with those boundaries. This has spurred larger initiatives to protect people’s privacy. Apple, for example, has taken a stand in its upcoming iOS with the “blue bar” update: when apps are collecting location data, users will see a blue bar at the top of their screen. In addition, when asking permission to use location, app developers previously had the option to offer only “always” and “never”; in the new iOS, another mandatory option, “only while using,” will be introduced.

The breakdown:

Understanding the data source is important, but not enough. While it’s understood that users opt in to share their location, when acquiring data it is crucial to understand how that consent was obtained if we are to move the industry’s privacy practices forward.

5. Location data = retargeting

It’s true that the lowest-hanging fruit with location data is retargeting devices that have been to specific venues. Such capabilities have proven to increase a campaign’s return on investment significantly. However, there are cases, such as with a CPG (consumer packaged goods) brand, where targeting devices that go into big-box retailers might not be sufficient without further insight into customers’ shopping habits.

The breakdown:

Companies with data science capabilities have started to leverage proximity and location data in a few interesting ways:

  • Attribution - matching ad impressions to store visits and measuring the success of a campaign.
  • Audience modeling - building custom segments based on visitation patterns, then looking for other devices with similar habits that match existing clients to identify new ones.
  • Cross-device - many device graphs are currently highly dependent on IP address, making it a challenge to separate devices in a household, a workplace, or on public networks. By looking at visitation patterns, one can start to filter out false matches and enhance the accuracy of the graph.
  • Data verification - as the market is full of fraudulent and inaccurate data, the deterministic methods behind most proximity data can be used to verify existing segmentation.
  • Predictive analytics - using past customer visitation patterns to predict where they are going next, and when.
  • Foot traffic analytics - understanding foot traffic into certain locations and venues has immense value for the financial industry, as well as for brick-and-mortar retailers carefully picking new store locations.
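As an illustration of the first item, attribution, here is a minimal Python sketch that matches ad impressions to later store visits on the same device within a lookback window. The data shapes and the seven-day window are assumptions made for the example; production attribution systems add deduplication, holdout groups, and probabilistic matching on top of this core idea.

```python
from datetime import datetime, timedelta

def attribute_visits(impressions, visits, window=timedelta(days=7)):
    """Count store visits that occurred within `window` after an ad
    impression on the same device (a simple last-touch attribution).

    impressions: list of (device_id, timestamp) ad-serve events
    visits:      list of (device_id, timestamp) store-visit events
    """
    # Index impression timestamps per device for quick lookup.
    by_device = {}
    for device_id, ts in impressions:
        by_device.setdefault(device_id, []).append(ts)

    attributed = 0
    for device_id, visit_ts in visits:
        for imp_ts in by_device.get(device_id, []):
            if imp_ts <= visit_ts <= imp_ts + window:
                attributed += 1
                break  # count each visit at most once
    return attributed
```

Dividing the attributed count by total impressions yields a visit rate that can be compared against a control group to estimate campaign lift.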

Macro-level location data has been around for many years; now, with the arrival of proximity technology, we are that much closer to the golden formula of relevancy and scale, and to solving for the consumer-centric approach. Apps like Facebook’s Messenger, Instagram, and Twitter have started to collect foreground location signals, and companies are doubling down on their investments in proximity and location technologies (look no further than Snap’s recent acquisition of Placed). We predict it is only a matter of time before we hear about LinkedIn’s or Amazon’s proximity and location (acquisition) strategy.

If you enjoyed this post, join us next week for our webinar, hosted in partnership with the LSA: “Busting 5 Myths about Proximity and Location-Data Marketing,” where we will discuss these five misconceptions in more detail.

Register HERE.