Google had planned to kill Chrome’s devilish little tracking cookies by now. But that hasn’t happened; the plan has been hit by delay after delay. Google’s latest update suggests—but does not assure—that so-called “cookie deprecation” will proceed “starting early next year.” That’s 200 days from now. It means at least 200 more days of the kind of Chrome user tracking that should have been banished years ago.

When those cookies go, they’ll be replaced with something Google says will “both protect people’s privacy online and give companies and developers tools to build thriving digital businesses.” This so-called Privacy Sandbox “reduces cross-site and cross-app tracking while helping to keep online content and services free for all.”

But the new risk for users is that those cookies might actually give way to something much worse from a privacy, security and tracking perspective.

Right now, Google is caught in a painful back and forth with regulators over how to kill tracking cookies without also killing the entire online marketing industry that relies on them. This Privacy Sandbox—put at its simplest—collects users into likeminded groups that can be served up to advertisers for targeted content rather than have them build or buy individual profiles based on digital fingerprinting.

And so, 200-plus days from now, the hope is that cookies go and something else is used instead. But there’s much to be done between Google, regulators (notably the UK’s Competition and Markets Authority, or CMA) and advocates for the rest of the industry, which has argued that Google will tilt the playing field in its favor, further consolidating its position as the world’s most valuable marketing machine.

But before we celebrate any bright new future, there’s now a catch—and it’s a big one. AI. We know that search is likely to be reinvented by AI-based search, itself a threat to Google’s dominance. And we also know that Google, unsurprisingly, is retrofitting AI across its suite of platforms and services. Chrome is no different.

As reported by Android Police, “we spotted the first definitive signs of [Google] adding in a new feature potentially powered by AI to make your browsing history more searchable. Alongside recent confirmation of AI’s involvement comes an entire debate about how much of our usage history we want AI to see and learn from?”

As I have already commented, when it comes to Messages and Gmail, Google’s proposals to search user data archives to sharpen the context for its generative AI and tighten the parameters around its marketing machine carry grave privacy risks. This is made worse because AI privacy policies are complex and are being ignored by an excited user base with more of a new-toy overload than a spoiled kid at Christmas.

As Android Police warns, “with great convenience comes great concern… just like cloud backups, which involve entrusting a random server on the internet with your cherished memories, involving AI to make your history searchable comes with the similar privacy risk of tech companies potentially honing AI models on your data.”

They’re absolutely right: this is a one-way street we need to consider carefully now, not leave until it’s too late. The site reports a disclaimer within the beta code “stating Google and its human reviewers may have access to your data. Specifically, the company says it will collect your History search terms, page content of the best matches, and the generated model outputs.” Take note.

Google will no doubt offer a range of tweaks and opt-outs as well as a revised privacy policy and in-app notifications when this AI fully hits. But most of this will be ignored. No-one has a grip yet on the implications from all this fast change, not the industry and certainly not its regulators. An urgent game of catch-up is required.

If the future prospect of Google’s AI storing and processing your entire search history isn’t thought provoking enough, the company’s Privacy Sandbox has just taken another very public hit that might give Chrome’s users a more immediate concern.

With echoes of Google’s ironic interpretation of “incognito,” a European privacy advocacy organization has just filed a complaint with regulators, alleging that Chrome’s users have been gradually tricked into enabling a supposed “ad privacy feature” that actually tracks people. “While the so-called ‘Privacy Sandbox’ is advertised as an improvement over extremely invasive third-party tracking, the tracking is now simply done within the browser by Google itself.”

Noyb has taken its complaint to the Austrian GDPR regulator, arguing that to do this, Google “theoretically needs the same informed consent from users. Instead, Google is tricking people by pretending to ‘Turn on an ad privacy feature’.”

The basis of the claim is Google’s choice of words and how those words are presented, which Noyb says “trick” users into agreeing to something they believe to be private but, it argues, is actually anything but.

For its part, Google has responded (courtesy of a statement given to The Register) that “this complaint fails to recognize the significant privacy protections we’ve built into the Privacy Sandbox APIs, including the Topics API, and the meaningful privacy improvement they provide over today’s technologies, including third-party cookies.”

The Topics API is that initiative to anonymize groups of like-minded users based on their likes and areas of interest, and serve up that information to advertisers, preventing any specific individuals from being identified or followed.
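To make that grouping idea concrete, here is a toy Python sketch of a Topics-style flow: classify visited sites into coarse interest topics, keep the user’s top topics for an epoch, and hand a caller one topic with occasional random noise for deniability. The taxonomy, function names and noise rate here are illustrative assumptions only; Chrome’s real implementation uses an on-device classifier and a standardized taxonomy of a few hundred topics, not a lookup table like this.

```python
import random
from collections import Counter

# Illustrative hostname-to-topic taxonomy (assumption, not Google's real taxonomy).
TAXONOMY = {
    "kicker.example": "Sports",
    "fitblog.example": "Fitness",
    "travelguide.example": "Travel",
    "recipes.example": "Food & Drink",
}

def top_topics(visited_hosts, k=3):
    """Derive the user's top-k interest topics for an epoch from visited sites."""
    counts = Counter(TAXONOMY[h] for h in visited_hosts if h in TAXONOMY)
    return [topic for topic, _ in counts.most_common(k)]

def topic_for_caller(topics, noise_rate=0.05, rng=random):
    """Return a single topic to an advertiser; with probability noise_rate
    (or if the user has no topics) return a random topic instead, so no
    specific individual can be reliably identified or followed."""
    if not topics or rng.random() < noise_rate:
        return rng.choice(sorted(set(TAXONOMY.values())))
    return rng.choice(topics)
```

The point of the sketch is the shape of the trade-off: advertisers see only a coarse interest label derived on the device, never the browsing history itself, and the injected noise means any single answer could be random.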

“Privacy Sandbox,” Google says, “is designed to improve user privacy and provide the industry with privacy-preserving alternatives to cross-site tracking. We’ve been closely engaging with privacy and competition regulators globally, and will continue to do that to reach a balanced outcome that works for users and the entire ecosystem.” The challenge is that a “balanced outcome” still seems far from reach.

I have approached Google for any comment on Noyb’s claims and its AI plans.

All very confusing for Chrome’s users, no doubt. And to be fair, there is a lot happening at once and it’s hard to keep up. Meanwhile, the irony for its 3-billion-plus users is that even the promised land of a cookie-free future will be less private than other browsers offer today—Safari, Firefox, Brave, DuckDuckGo…

That’s for now, at least. But AI will have an as-yet-unknown impact across the board. The industry and its regulators need to provide guidance and a set of standards, maybe a traffic-light system covering tracking, data storage, model training and human review. Users will not continually monitor privacy policies, if they ever read them at all.

None of this was in evidence when Google first promised to kill off tracking cookies—but it is now. Before we get to that proposed start date in early 2025, 200 days from now, we need to refresh the debate. Otherwise what we get risks being much worse.