Understand the flip side of data personalisation
The personal information we give up in order to improve our experience can, just as easily, be weaponised against us
Every time an incident like Cambridge Analytica occurs, we feel compelled to regulate the hell out of the industry responsible. We believe none of this would have happened if we had the right laws in place, and that, now that everything’s gone pear-shaped, we should double down and plug the loopholes that allowed the incident to occur.

But what exactly is it we need to regulate?
Modern tech companies are focused on taking the friction out of our daily lives. They do this by knowing us really well — understanding who we are and what we like — not by employing armies of psychologists, but by developing really smart algorithms that are capable of sorting us into categories that optimally define us. This knowledge of who we are improves our experience of their service, as it allows them to provide us with features and services customised to our likes and preferences. This is what keeps us coming back to them — this user experience that magically gives us what we want without us asking for it.
Personalisation is at the core of almost every business today — whether it is the new-economy company that didn’t exist five years ago or the traditional brick-and-mortar enterprise that wants to use tech to more effectively deliver its products and services to us. Everyone wants to understand us better so they can provide us what we want in a personally differentiated way. As users, this personal attention appeals to us because it takes the friction out of consumption. We like the fact that their algorithms do the heavy lifting for us, serving up recommendations of what we like without us having to work to find it.
And there are other, less evident, benefits to personalisation. As we understand our bodies better, we have realised that customised treatment that caters to the specific ailments of our individual phenotype and personal microbiome is far more effective — and has potentially fewer side effects — than the broad-spectrum treatment methodology that we’ve followed for centuries. Precision medicine is fast becoming a reality and, as we couple our fast-improving knowledge of our genome with our ability to constantly monitor personal parameters using wearable smart devices, it will not be long before our medications are titrated daily to our individual requirements.
But there is a flip side to personalisation. As we allow the services we use to know more about us, others that we never intended to will come to know us as well. The personal information we give up in order to improve our experience can, just as easily, be weaponised against us, placing us in filter bubbles where the information we receive is limited and our access to necessary counterpoints to our staunchly held beliefs is censored. It will render us unusually vulnerable to social engineering — leaving us open to identity theft and all its accompanying financial and reputational challenges.
This is the dilemma that is central to the regulation of data-driven personalisation. If we allow it to proceed unchecked, we run the risk that nefarious elements will use this information to harm us — financially, reputationally and intellectually. If we ban it altogether, or prevent businesses from learning about us, we deprive ourselves of the many benefits that come from personalisation, benefits it is rarely in our interest to forgo.
Which is why we tend to regulate retrospectively, responding to the effects of an algorithm only after they are plainly visible to us. We design our regulations to prevent what just happened from happening again, because regulators are ill-equipped to predict how new technologies will harm us until that harm can actually be observed.
There is one entity that is capable of assessing the harm an algorithm can potentially cause before it actually occurs — the company that deploys it. If we can shift our regulatory focus to incentivise companies to take a broader view of the algorithms they design, forcing them to look beyond the commercial benefits that accrue to them and to ensure that their algorithms do not accidentally expose users to harm, we will be able to regulate the data economy more effectively without depriving ourselves of the benefits it has to offer.
This approach will leave technology companies free to innovate, while setting out broadly articulated boundaries they should not cross. If these boundaries are designed well, with strong and effective punitive consequences for crossing them, tech companies will be compelled, in their own interest, to look beyond narrow commercial gains and design their algorithms and business processes so that they cause no harm.
Where technology evolves rapidly and has ramifications that can span continents, no regulator will ever be able to prevent harm from happening. We should not expect them to. Instead, we need to design our laws to ensure that those who actually can prevent these consequences will.
This is the second in a series of articles on data privacy
Rahul Matthan is a partner at Trilegal
The views expressed are personal