The NDP’s sudden push to ban algorithmic pricing can make the tactic feel like it came out of nowhere. But companies have long been using a variety of data-driven inputs to calibrate prices.
In digital marketplaces, the norm of a stable posted price is becoming less universal: retailers and platforms increasingly have the technical ability to show different people totally different prices, discounts, or product rankings for the same goods, based on data such as location, browsing behaviour, purchase history, device type, or other inferred characteristics.
This shift began with geographic pricing, which slightly adjusts a product's price based on the buyer's location. This kind of market partitioning relies on information about the kinds of people who live in a particular neighbourhood, and it has been documented at Staples, Home Depot, Expedia, McDonald's and other firms. A price can now also change based on your proximity to a store: a 2025 study found that Target charged more for a TV in its app when a customer was near a Target store than when the same consumer was farther away.
Instead of discriminating based on where the potential customer is, computational systems now calibrate pricing based on who they think you are.
In 2020, Tim Hortons was investigated by the Privacy Commissioner of Canada for tracking everywhere users of its mobile app rewards system went, particularly when they visited a competitor's location, in order to tailor promotional and marketing materials. The "price" of the discount being offered as an incentive was directly informed by where someone was.
While e-commerce and digital marketing have brought about unprecedented levels of consumer empowerment and autonomy—like the convenience of pre-ordering your coffee—they have simultaneously subjected us to a level of control and manipulation over our decision-making that we are only just beginning to grasp.
Firms conduct this mundane surveillance across various online behaviours and activities. That means that personalized algorithmic pricing is less reliant on just where you are in a physical context, and more about where you are socioeconomically.
Even just your purchasing history can show a lot about you. As early as 2002, a Canadian Tire executive discovered that customers who purchased items like protective felt pads for furniture legs were more responsible and paid credit card bills on time, while customers who purchased skull accessories were more likely to miss payments. Inferences like these could then be leveraged to determine what credit card limit you were offered.
Surveillance pricing is a fitting name for the worst examples of these pricing strategies, given that they are only possible because of the large-scale corporate surveillance operations these firms undertake and the massive amounts of data those operations generate.
You do not need to look very far to get the picture of what kind of data is at stake. Loblaw’s own privacy policy discloses that the company may use a range of technologies to collect your information both online and in physical stores, including anonymous video analytics or data management platforms, for a range of vague purposes, such as enhancing the customer experience or tailoring offers. In some cases, as the policy states, your consent to the collection of personal information may simply be “implied.”
Loblaw reassures us that, wherever possible, personal information is aggregated, anonymized or de-identified when it is used for data analytics or research purposes. Yet, de-identified data can still enable incredibly precise consumer segmentation and tailored pricing, even if a firm never knows exactly who the individual consumer is.
Currently, de-identified data is not covered by PIPEDA, which means firms effectively have a hall pass to continue using it to enable exploitative pricing practices. Privacy law conceptualizes rights as belonging only to individual consumers, not to targeted segments of consumers, and so treats only personally identifiable information as presenting a risk.
Québec's Law 25 addresses this by classifying de-identified data as personal information that remains subject to stricter requirements, unless the data has been irreversibly anonymized. The law might also offer greater protections against surveillance pricing, but it has not yet been tested.
The other problem in our privacy law is consent: can you meaningfully consent to the scale of the data collection and analysis that facilitates surveillance pricing—especially when firms may decide that your consent is “implied”? Indeed, a bill that recently passed in Maryland purported to ban personalized pricing, but made a problematic exception to allow it so long as an individual consents.
Maryland is only the latest state to take or consider legislative action against personalized pricing. There has been an onslaught of bills seeking to ban the practice, from retail to rental and labour markets across the U.S. However, many of these bills have made another worrisome carveout: subscription-based contracts.
Take New York’s Algorithmic Pricing Disclosure Act, effective as of November 2025: it requires any retailer using personalized algorithmic pricing to prominently disclose that the price you are seeing was set by an algorithm using your personal data. Among its exceptions are prices offered to consumers that have existing subscription-based contracts or agreements. There are similar provisions in Maryland’s bill and a Tennessee bill; Colorado’s bill even explicitly mentions loyalty programs.
Such carveouts are likely to cover most loyalty or rewards programs offered today, even your Amazon Prime membership, and therefore give firms ample leeway to continue or adopt surveillance pricing strategies.
As they become increasingly commonplace, loyalty programs encourage and normalize "mundane" corporate surveillance, where firms gobble up all the information about you they possibly can and get to say it is for your own good. They are effectively a Trojan horse that uses the promise of generous benefits to lock you into a system of surveillance and extraction. They are also the perfect microcosm of the future of pricing: increasingly personalized and unknowable.
In regulating surveillance pricing we cannot fall for the common pitfalls: relying exclusively on consent, making exceptions for subscription-based contracts and failing to consider de-identified data.
Vass Bednar is the Managing Director of the Canadian Shield Institute and co-author of The Big Fix. Emily Osborne is a Policy Research Associate at the Canadian Shield Institute and a Fellow with the Canadian Anti-Monopoly Project.
The views, opinions and positions expressed by all iPolitics columnists and contributors are the author’s alone. They do not inherently or expressly reflect the views, opinions and/or positions of iPolitics.