05/12/2018
Introduction
Few would dispute that data and technology bring more enjoyment and well-being into our lives. However, as with any revolutionary change, there are trade-offs. In this article, we identify and explore situations where the evolution of Big Data and AI could have impacts on our everyday lives that are less than positive.
The dilemma of Data
Data is fascinating in itself, and the more of it we have, the more valuable it becomes. What better evidence than the $19bn purchase of WhatsApp in 2014, a firm that had never made a single dollar of profit! The acquisition of The Climate Corporation (TCC), a US crop insurer, by Monsanto Inc. for $1bn was equally interesting, because Monsanto didn't buy it for the insurance business per se, but for the value of the data TCC held on US crop yields. Monsanto now uses this data to advise its growers (customers) when, where, and how to plant its seeds.
Beneath this, however, lies a genuine concern for our privacy, our identity, and our trust in the companies we deal with.
Trust
Being able to predict human behaviour is the Holy Grail of faster sales and growth. Companies that can be first to advertise a relevant product to you, and even predict what you want before you do, win.
But how can we trust organisations to use our data ethically, fairly and transparently? Below we discuss some of these dilemmas in the fields of health and insurance.
1. Medical and Healthcare
If you are happy for your doctor to hold your personal medical records, are you equally happy for an outsourced technology partner of the UK National Health Service (NHS) to hold the same?
On the face of it, I'm not sure. On the other hand, imagine walking into a hospital: as you step through the sliding doors, facial recognition software analyses your features, and your medical records are accessed as you make your way down the corridor to the reception desk. You are asked to take a finger scan and a hologram then asks you a few questions. By the time you reach the reception, the nurse behind the counter either has an accurate diagnosis of your problem, in which case you are prescribed medicine and sent home, or you are directed to the appropriate specialist on the relevant ward. This scenario is only a few years away, if we want it.
The advancement of medical science has even more exciting prospects. Take Genetic Predictive Testing (GPT). This technology allows individuals to provide DNA samples and be told, with reasonable certainty, the likelihood and timing of developing cancer or other diseases such as Alzheimer's. You then have the chance to make the lifestyle changes you need, decades in advance. However, what are the consequences if your life or private medical insurer got hold of your diagnosis? Your premiums could shoot up or, worse still, you could be priced out altogether. Although UK insurers are legally prohibited from asking questions about GPTs, it is possible, as we have seen, that they could collect such information from social media platforms or by virtue of becoming the future owners of Yahoo or Gmail.
These are serious issues with societal consequences, and they are currently being debated at the highest levels of government. They have important implications not just for individuals but for organisations such as the NHS, which relies heavily on private medical insurance to relieve its overstretched resources.
2. Insurance modelling
Big Data is quickly turning the insurance model on its head. To date, insurance pricing has been based on a traditional 'pooling' mechanism. To explain: if everyone in the room wanted to buy home insurance, the insurer would count the number of people, predict the losses, and charge everyone an average premium. The pot of premiums is hopefully enough to pay the unfortunate individual(s) who make a claim, cover the transaction costs, and leave a small amount for profit. Does this feel reasonable and fair?
Yes, but Big Data is now giving insurers the ability to predict individuals' behaviours with a good deal of accuracy, so they can price the risk according to a person's lifestyle and habits. That is great if you are on the receiving end of a nice premium reduction; however, where there are winners there are also losers. In the GPT example there could certainly be losers, and young drivers are another obvious example.
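To make the contrast concrete, here is a minimal sketch in Python of the two pricing approaches: a pooled premium that charges everyone the average expected loss, and an individualised premium driven by each person's predicted risk. The claim cost, risk probabilities and loading factor are entirely made-up figures for illustration, not real insurance data.

```python
# Illustrative sketch: pooled vs. individually priced insurance premiums.
# All figures (claim size, risk scores, loading) are made up for the example.

CLAIM_COST = 10_000   # assumed average cost of a claim, in GBP
LOADING = 1.10        # 10% added to cover transaction costs and profit

# Hypothetical policyholders with a predicted annual probability of claiming.
# Under traditional pooling the insurer ignores these individual differences.
policyholders = {
    "low_risk_homeowner": 0.01,
    "average_homeowner": 0.03,
    "young_driver_equivalent": 0.10,  # high predicted risk
}

def pooled_premium(risks):
    """Everyone pays the same: the average expected loss, plus loading."""
    expected_losses = sum(p * CLAIM_COST for p in risks.values())
    return LOADING * expected_losses / len(risks)

def individual_premium(probability):
    """Big Data pricing: each person pays their own expected loss, plus loading."""
    return LOADING * probability * CLAIM_COST

if __name__ == "__main__":
    flat = pooled_premium(policyholders)
    print(f"Pooled premium for everyone: £{flat:,.2f}")
    for name, p in policyholders.items():
        print(f"{name}: £{individual_premium(p):,.2f}")
```

On these made-up numbers, everyone pays roughly £513 under pooling, whereas individual pricing drops the low-risk customer to £110 and pushes the high-risk one up to £1,100: the winners-and-losers effect described above.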
Transparency
How do we feel about companies being able to control what we watch, the dates we go on, and the food we eat? How do we guard against racial profiling? Is it acceptable to influence how we vote in elections? Where do we draw the line?
There is no easy answer, and it is even more problematic when countries have different cultures, political interests and standards. Take a mobile phone operator in Africa, for example. Many data rules and regulations enacted in the 1980s are still in force and restrict what telcos can and cannot share. So, imagine a group of people carrying a contagious disease, moving from one place to another. The moral thing for the operator to do would be to use its location data to alert the authorities to the group's whereabouts, directing them to the right areas and allowing doctors to prevent further spread. However, it cannot, and if it breaks the rules it risks fines and legal action.
Who should be responsible for regulating the ethics around data?
If we are to rely on business, we need leaders who have a good radar and a decent moral compass. If companies want our trust, they are going to have to radically improve their standards of transparency. However, common sense tells us that companies owned by shareholders are steered towards maximising profits, which means government will need to have a big say in all of this. The challenge will be to build global cooperation in a new era of nationalist sentiment.