Why addressing ethical questions in AI will benefit organizations
The ethical use of AI is becoming fundamental to winning people’s trust, a new study from the Capgemini Research Institute has found.
As organizations race to harness the benefits of AI, consumers, employees, and citizens are watching closely and are ready to reward or punish behaviour accordingly. Those surveyed said they would be more loyal to, purchase more from, or advocate for organizations whose AI interactions they deem ethical.
Artificial intelligence may radically change the world we live in, but it is the ethics behind it that will determine what that world looks like. Consumers seem to know or sense this, and increasingly demand ethical behaviour from the AI systems of the organizations they interact with. But are organizations prepared to answer the call?
Ethical AI is the cornerstone upon which customer trust and loyalty are built
In the new report from the Capgemini Research Institute, Why addressing ethical questions in AI will benefit organizations, the Institute surveyed 1,580 executives in 510 organizations and over 4,400 consumers internationally, to find out how consumers view ethics and the transparency of their AI-enabled interactions and what organizations are doing to allay their concerns. We found that:
- Ethics drive consumer trust and satisfaction. In fact, organizations that are seen as using AI ethically enjoy a 44-point NPS® advantage compared to those seen as not using AI ethically.
- Among consumers surveyed, 62% said they would place higher trust in a company whose AI interactions they perceived as ethical; 61% said they would share positive experiences with friends and family.
- Executives in nine out of ten organizations believe that ethical issues have resulted from the use of AI systems over the last 2-3 years, citing examples such as the collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in banking and insurance. Additionally, almost half of consumers surveyed (47%) believe they have experienced at least two uses of AI that resulted in ethical issues in the last 2-3 years. At the same time, over three-quarters of consumers expect new regulations on the use of AI.
- Organizations are starting to realize the importance of ethical AI: 51% of executives consider that it is important to ensure that AI systems are ethical and transparent.
How to address ethical questions in AI?
Given this landscape, how can organizations build AI systems ethically? The findings suggest that organizations focusing on ethics in AI must take a targeted approach to make their systems fit for purpose. Capgemini recommends a three-pronged strategy for ethics in AI that embraces all key stakeholders:
- For CXOs, business leaders, and those with a remit for trust and ethics: Establish a strong foundation with a strategy and code of conduct for ethical AI; develop policies that define acceptable practices for the workforce and AI applications; create ethics governance structures and ensure accountability for AI systems; and build diverse teams to ensure sensitivity towards the full spectrum of ethical issues.
- For customer- and employee-facing teams, such as HR, marketing, communications, and customer service: Ensure the ethical usage of AI applications; educate and inform users to build trust in AI systems; empower users with more control and the ability to seek recourse; and proactively communicate on AI issues internally and externally to build trust.
- For AI, data, and IT leaders and their teams: Make AI systems transparent and understandable to gain users' trust; practice good data management and mitigate potential biases in data; and use technology tools to build ethics into AI.
Clearly, AI will recast the relationship between consumers and organizations, but this relationship will only be as strong as the ethics behind it.
For the full findings, case studies, and advice, get your copy of the report by filling in the form below: