Malcolm Crompton continues his look at the future of data processing policies and how businesses can win the trust of their customers through improved services.
In my last post, I alluded to Microsoft’s Trustworthy Computing Next paper and spoke about the need to rebalance privacy protection between user control of data collection and what firms actually do with that information – increasing the focus on the latter without ever denying the importance of the former.
Neither Scott Charney nor I am saying user control is useless or unimportant – rather, it is a necessary but not sufficient step in protecting privacy. In this post I’d like to share some ideas on how we can tackle the challenge of data usage, borrowing from the field of medical ethics.
Intrigued? Surprised? Confused? Read on.
The new digital landscape
First, to get a better idea of where we stand, it is instructive to look at the state of our network society today. As the Microsoft paper aptly observes, we now live in a data-centric world. The Internet has been transformed from a linear platform, where users accessed content in a client-server architecture, to a highly interconnected one where users, devices and applications interact with each other. More data is being generated than ever before – in fact, it is doubling every two years.
Big Data is the hot issue of the moment worldwide, and it’s going to have a profound impact on the way we deal with information and, in turn, our privacy. As the journalist and technology commentator Kenneth Neil Cukier put it: “it refers to things that we cannot see with the naked eye, but is only revealed to us from a huge body of data.” What sorts of things can it reveal? Well, literally anything. By trawling through their immense collections of data, companies can reveal precisely such things – patterns that no single record, viewed with the naked eye, would ever disclose.
It is in this context that the insufficiency of user control can be seen: we simply do not have the ability (or the option) to control every piece of information that we give out, even where formal privacy policies are offered, as research by the CyLab at Carnegie Mellon University has demonstrated. Every time we open an app, click on a link or “like” a comment – things that we do willingly and voluntarily – a tiny morsel of information is logged about us. Furthermore, it is not clear that restricting collection is in the public interest. For example, the EU Commission in its new Regulation has proposed that geo-location data be considered ‘personal data’, with the attendant rules. However, handled clumsily, such a formulation may have a chilling effect on innovative and beneficial applications of geo-location data.
As Cukier astutely notes above, we must separate the process of Big Data from its output. There is a danger that, in the well-meaning push for better privacy protection, we make inflexible rules on process (e.g. consent, purpose, data minimisation) which create friction with potential uses of the information for the broader public benefit. What is missing are the rules that allow us to pursue the latter. This is where the way medical ethics has addressed similar issues might shed some light.
Rules of Play
Physicians deal with fundamentally private aspects of individuals’ lives, including our bodies and our personal histories. As such, they have special ethical and legal duties. Medical ethics consists broadly of the following principles:
- Autonomy – the patient has the right to refuse or choose their treatment.
- Beneficence – a medical practitioner should act in the best interest of the patient.
- Non-maleficence – “first, do no harm”.
- Justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment (fairness and equality).
- Respect for persons – both the patient and the person treating the patient have the right to be treated with dignity.
- Truthfulness and honesty – the concept of informed consent has increased in importance since historical events such as the Doctors’ Trial at Nuremberg and the Tuskegee syphilis experiment.
This is why we can confidently reveal intimate details of our lives to our doctor, safe in the knowledge that they will not be divulged over dinner table conversation. The issue is not what the doctor knows about us; it is what they do once they know.
Does this approach to medical ethics offer any insight into how we should handle personal information?
Today our online and offline worlds are becoming enmeshed. Many important transactions in our lives – health, banking, insurance – are now being conducted online. We are also sharing more of our lives online than ever before. Of course all this information is being dutifully collected. How do the above principles apply to companies’ use of our information?
- Autonomy – Companies should respect the privacy and choices of the individual.
- Justice – Companies should act fairly. They should be transparent in their dealings and responsive to the individual. They must take responsibility and provide restitution when something goes wrong.
- Beneficence – While strictly speaking companies are out to make a profit, their use of the information should provide at least some sort of benefit (ultimately) for consumers.
- Respect – Companies have never ‘owned the customer’ and never will.
- Non-maleficence – Beyond avoiding obvious misuse, companies must also be careful to prevent unintended exposure or discrimination through their use of information, and should make individuals feel respected and assured.
As with all things in life, balancing the various factors won’t be easy. A key consideration will be reversibility. Just as a surgeon must think carefully before amputating a limb, extra precautions are needed when a proposed use of sensitive personal information cannot be undone.
To borrow another example from medicine: ethics boards are frequently used to determine the appropriateness and efficacy of a procedure. Should a similar process be instituted within big data collectors and users to decide when these conditions are met?
None of this means that ‘notice and choice’ disappears. But it does mean that there might be a way in which it is offered selectively in situations where it matters.
Am I naive to think that self-interested companies would willingly conform to principles like ‘justice’ and ‘beneficence’? Perhaps. However, I would argue that it is a smart business move. For too long the burden has fallen on the individual – to learn about a company’s data processing policies, to decide whether to share information and to pick up the pieces when things go awry. If, instead, companies were transparent, accountable and allocated risks fairly, this would be a game changer.
At the end of the day it comes down to trust. We share our sensitive information with doctors because we trust them. Imagine the benefits if we trusted data users more – stronger information flows, new and improved services, more technological innovations and yes, higher profits for companies. It is time we looked closely at data usage, and in particular, the rules of data usage and how to generate that trust.
Malcolm Crompton is Managing Director of Information Integrity Solutions (IIS), a globally connected company that works with public sector and private sector organisations to help them build customer trust through respect for the customer and their personal information. He was also foundation President of the International Association of Privacy Professionals, Australia New Zealand, www.iappANZ.org.