Insurance has always relied on data. And now that revolutionary techniques for acquiring, analysing and sharing data are going mainstream, the sector is investing heavily in new ways of gaining deeper insight into its customers. The way in which this is being done is significant – patterns that form now will become habits that shape the future. We are therefore in a critical window, as mass adoption of ‘big data’ begins, in which we can develop ethical principles that capture the promise big data holds without losing sight of important social values.
Neil Richards and Jonathan King sum it up well in this excellent paper:
“If we fail to balance the human values that we care about, like privacy, confidentiality, transparency, identity and free choice with the compelling uses of big data, our Big Data Society risks abandoning these values for the sake of innovation and expediency.”
The insurance sector needs to build structures that encourage the ethical use of data. If this doesn’t happen, there’s a real risk that consumers will lose trust in the very services that insurers have invested so much in big data to deliver. Some technology pioneers have recently voiced such concerns in relation to the revelations of Edward Snowden, in an apparent step away from earlier pronouncements that privacy was ‘a historical anomaly’.
There’s no doubt that big data will transform our lives and bring immense benefits to society: medical researchers will enter new universes of understanding, and individuals will gain unprecedented feedback about how they live their lives, to name but two. The language being used, however, is often too techno-centric, phrased in terms of analysis and devices. This tends to obscure the nature of big data’s key contribution to sectors like insurance: awareness that builds more creative relationships and creates mutual value.
The language can also be tinged with an element of inevitability, particularly in relation to big data’s influence on privacy. “Privacy is dead; get over it” is an oft-quoted phrase that tends to frame the dialogue over data and values in a somewhat binary manner. This sees information as public or private; known to us alone or broadcast to the world. It belies the reality that private data often exists in an intermediate state, held in trust with the expectation that it will be kept confidential and used only for the purpose for which it was disclosed. Think of the information that your doctor or lawyer holds about you. Is insurance any different?
Privacy is very much alive (think of Edward Snowden and the NSA), but it is in a state of flux. Privacy as information we keep secret or unknown is definitely shrinking. We are living through an information revolution, and the collection, use, and analysis of personal data is inevitable. Richards and King again:
“if we think about privacy as the question of what rules should govern the use of personal information, then privacy has never been more alive. In fact, it is one of the most important and most vital issues we face as a society today. … We have some privacy rules to govern existing flows of personal information, but we lack rules to govern new flows, new uses, and new decisions based upon that data. What we need are new rules to regulate the [social] costs of our new tools without sacrificing their undeniable benefits.”
Richards and King are talking about ‘new rules’ in their widest, social sense: legislation overlaid with ethical principles that allow firms to explore the integration of privacy protection into their specific business models and the products and services that flow from them.
So what sort of ethical principles should a typical insurance firm be thinking about? I’ll explore this shortly in another post, in the form of 11 principles that provide individual checkpoints for the integration of ethical issues into how your firm obtains, processes and shares data.