Responsibility for AI Ethics is Shifting from Tech Roles to Business Executives, says IBM Study

  • 80% of survey respondents pointed to non-technical executives as the primary advocates for AI ethics, compared with 15% in 2018
  • 79% of CEOs surveyed are prepared to implement AI ethics practices but fewer than a quarter of organisations have acted on them
  • 68% of organisations said diversity is important to mitigate AI bias but revealed their AI teams are 5.5 times less inclusive of women, 4 times less inclusive of LGBT+ staff and 1.7 times less racially inclusive than their overall workforce

A new IBM (NYSE: IBM) Institute for Business Value (IBV) study has revealed a fundamental global shift in the roles responsible for managing and maintaining AI ethics at an organisation. When asked who was primarily in charge of AI ethics, 80% of respondents identified a non-technical executive, such as a CEO, as the primary “champion”, a sharp increase from 15% in 2018.

The IBM study, “AI ethics in action: An enterprise guide to progressing trustworthy AI”, also shows that despite a growing requirement for developing trustworthy AI, including an improved performance in sustainability, social responsibility, diversity and inclusion, a gap remains between intentions and actions.

Across Europe, business executives are the driving force in AI ethics

  • CEOs are viewed as most accountable for AI ethics by 30% of European respondents.
  • While 64% of European responses cite the CEO or other C-level executive as having a strong influence on their organisation’s ethics strategy, more than half also cite board directives (56%) and the shareholder community (55%).

Building trustworthy AI is perceived as a strategic differentiator and organisations are beginning to implement AI ethics mechanisms

  • More than three-quarters of global business leaders surveyed this year agree AI ethics is important to their organisations, up from about 50% in 2018.
  • Amongst European respondents, 73% believe ethics is a source of competitive differentiation and more than 60% of these respondents view AI and AI ethics as important in helping their organisations outperform their peers in sustainability, social responsibility, diversity and inclusion.
  • As a result, 52% of European respondents say their organisations have taken steps to embed AI ethics into their existing approach to business ethics.
  • More than 40% of European respondents say their organisations have created AI-specific ethics mechanisms, such as an AI project risk assessment framework and auditing/review process.

Ensuring ethical principles are embedded in AI solutions is an urgent need for organisations around the globe but progress is slow

  • More surveyed CEOs (79%) are now prepared to embed AI ethics into their AI practices than in 2018 (20%) and more than half of organisations have publicly endorsed common principles of AI ethics.
  • However, fewer than a quarter of responding organisations have operationalised AI ethics and fewer than 20% of respondents strongly agreed that their organisation’s practices and actions match (or exceed) their stated principles and values.
  • 68% of surveyed organisations acknowledge that a diverse and inclusive workplace is important for mitigating AI bias but IBM’s findings indicate that AI teams are still substantially less diverse than their organisations’ workforces: 5.5 times less inclusive of women, 4 times less inclusive of LGBT+ individuals and 1.7 times less racially inclusive.


“As many companies today use AI algorithms across their business, they potentially face increasing internal and external demands to design these algorithms to be fair, secured and trustworthy; yet, there has been little progress across the industry in embedding AI ethics into their practices,” said Jesus Mantas, Global Managing Partner, IBM Consulting. “Our IBV study findings demonstrate that building trustworthy AI is a business imperative and a societal expectation, not just a compliance issue. As such, companies can implement a governance model and embed ethical principles across the full AI life cycle.”

The time for companies to act is now. The data suggests that those organisations that have a broad AI ethics strategy interwoven throughout their business could gain a competitive advantage. The study provides recommended actions for business leaders, including:

  • Take a cross-functional, collaborative approach – ethical AI requires a holistic approach and a holistic set of skills across all stakeholders involved in the AI ethics process. C-suite executives, designers, behavioural scientists, data scientists and AI engineers each have a distinct role to play in the trustworthy AI journey.
  • Establish both organisational and AI lifecycle governance to operationalise the discipline of AI ethics – take a holistic approach to incentivising, managing and governing AI solutions across the full AI lifecycle, from establishing the right culture to nurture AI responsibly, to practices and policies to products.
  • Reach beyond your organisation for partnership – expand your approach by identifying and engaging key AI-focused technology partners, academics, start-ups, and other ecosystem partners to establish “ethical interoperability”.


The IBV study, “AI ethics in action: An enterprise guide to progressing trustworthy AI”, surveyed 1,200 executives in 22 countries across 22 industries to understand where executives stand on the importance of AI ethics and how organisations are operationalising it. The study was conducted in cooperation with Oxford Economics in 2021. The full study is available at:
