Blog 2: An Economic Case for Ethical Data Use
The combination of exponential growth in data volumes, the interconnectedness of this data (social, spatial, purchasing), and the subsequent ability to drive commercial insights through advancing technologies such as AI and Machine Learning presents an undeniably exciting opportunity for businesses in 2021 and beyond. What our first blog in the Data Ethics series highlighted, however, are the inherent risks that a lack of control over our data presents, in the form of cyberattacks and/or misuse of AI, and the role data ethics needs to play in order to mitigate them.
In this article we’ll review the business case for ethical data controls by exploring changing sentiments amongst those stakeholders integral to commercial survival, as well as the potential repercussions of inaction.
There are two parties that are instrumental in driving business change. First and foremost is the customer. Businesses must adapt to meet the evolving demands and needs of the customer to ensure they remain competitive. The second group are the regulators, by which we mean those that have an ability to assert an element of control over an organisation. Governing bodies such as the Financial Conduct Authority (FCA) are an obvious example from a compliance perspective, but the power of investment companies is also something that should not be underestimated. Below, we will discuss the importance of these stakeholders in the context of data ethics.
Consumers care
Businesses that best align their values with those of their customers are the most likely to capture market share and maintain customer retention, both of which have direct impacts on a company's revenues. In fact, Bain & Co estimate that a 5% increase in customer retention can increase profitability by as much as 25%, due to the recurring revenue throughout the customer lifecycle. Ethical use of data is one of these customer values, and a key one at that. The Edelman Trust Barometer emphasizes that ethical drivers are three times more important to company trust than competence.
With the benefits of building trust clear, it's also worth highlighting the necessity of maintaining it. According to the same Edelman survey, 45% of consumers would never trust a brand again after it displays unethical behaviour or is involved in a scandal, and 40% say they'd stop buying from that brand altogether. Having an ethical framework within a business not only helps build this trust but also establishes organizational layers of defence that lower the risk of losing it.
It’s also worth noting that, according to the same survey, 81% of employees expect their CEOs to speak out about the ethical use of tech. With this in mind, a company whose values do not resonate with those of its employees, or that even betrays them through unethical use of data, is likely to suffer from higher rates of employee dissatisfaction and staff turnover, which ultimately reduces productivity and inhibits growth.
Regulatory intervention is inevitable
To better predict how regulators may clamp down on unethical uses of data, it’s worth exploring their historical approaches to the introduction of new controls. The first approach can be categorised as a reaction to some form of ‘black swan’ event, such as the 2008 financial crash, and is characterised by significant new regulation around how companies can operate, introduced over a very short period. Whilst we are yet to see this in the data ethics space, the potential for it to occur should not be underestimated.
The route we’re more likely to see, however, and are already seeing, is the gradual introduction of controls in response to a combination of shifting customer sentiment and the cumulative negative impact of numerous smaller events. As we discussed in our first blog in the series, the regulatory reaction to climate change is a good example of this. In either case these approaches are reactive as opposed to proactive, meaning damage must be incurred before change is institutionalised, which is clearly a bad outcome for business.
At DTSQUARED we predict that investment firms will ultimately demand a defined level of ethical maturity when it comes to data, similar to the demands now commonly made in the ESG sector. For example, Climate Action 100+ is an investor-led initiative, controlling $52 trillion in assets, that puts pressure on companies to hit reduced-emissions targets.
When it comes to action by the governing bodies, 2021 is set to be the year where regulation of AI takes centre stage, with data ethics at the heart of it. The European Commission is aiming to publish fresh legislation covering the safety, liability, fundamental rights and data aspects of AI in the first quarter of this year, and the UK government has publicly expressed interest in this type of reform, although it's unlikely anything will be passed into law for another couple of years.
Recent publications by policymakers and lawmakers offer clues as to what can be expected in terms of data ethics regulation. The European Commission’s Inception Impact Assessment (IIA), published in July 2020, was weighted heavily towards ethical principles, citing the need for human-centric AI controls, safeguards against bias and discrimination, and data protection.
In the UK, the Centre for Data Ethics and Innovation has proposed a roadmap on bias in algorithmic decision-making for government, regulators, and industry, with the aim of supporting “responsible innovation”. The AI Council and the House of Lords Liaison Committee have both published recommendations to help the UK government develop a national AI strategy, calling for a “cross-sector ethical code of conduct”.
What’s the potential cost of inaction?
Even ignoring the shifting tides we’re seeing in the regulatory space, there are clear indications from the private sector that companies who choose to bury their heads in the sand on data ethics risk getting left behind. The Enterprise Data Management (EDM) Council recently announced that the top two requests from organisations concern Advanced Analytics and Data Ethics. Combine this with the fact that 42% of CDOs said data ethics is now part of their role, something that hadn’t even appeared on previous surveys in 2015 and 2017, and it’s clear that many companies are being proactive. This reflects the sentiment of the general public, three-quarters of whom expect CEOs to drive industry change rather than waiting for it to be imposed on them by governing bodies. In terms of future trends, Gartner also predicts that within 5-10 years data ethics will be an integral part of customer marketing, used to ensure trust is built and maintained.
Whilst a loss of competitiveness is one consideration, companies should also weigh the likely repercussions of losing control. Failures of internal policy and lines of defence around data can manifest as data breaches and/or discriminatory bias against customers in AI algorithms, both of which carry the risk of regulatory fines, not to mention damage to brand image and loss of trust.
Security breaches have increased by 11% since 2018 and 67% since 2014, with the average breach costing $3.86 million as of 2020. Case studies of unethical use of algorithms also seem to be becoming more prevalent; a recent topical example being the algorithms that auto-graded school exam results during COVID and disproportionately awarded higher marks to private school students.
These trends highlight that both private and governmental bodies are already struggling to effectively control their data. When we combine this with data volumes expected to triple in the next 5 years, and the proliferation of advanced analytics and AI in companies of all sizes set to continue, it’s clear that current controls are not sufficient to protect customers. A data ethics framework looks to combat these emerging risks by establishing additional layers of defence, thereby reducing the likelihood of damaging the relationship with customers. With consumers increasingly sensitive to, and aware of, how companies use their data, a trend which is set to continue, the practice of organizational data ethics should not be seen as an economic burden. On the contrary, it represents a commercial opportunity to build trust amongst both prospective and existing customers, ensures you have a head start on the regulation that’s expected, and reduces the potential for brand-damaging data decisions.
Join us for the third and final part of our Data Ethics blog series, where we’ll discuss how to incorporate data ethics into your business strategy and build digital trust with your customers. Sign up here to get the article straight to your inbox, and if you can’t wait until then, please get in touch and we’d be happy to set up a tailored complimentary session.