Responsible AI - a top management issue

While 84% of global executives believe responsible AI (RAI) should be on top management agendas, only 25% have comprehensive RAI programs in place, according to a joint study published today by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).


The report, To Be a Responsible AI Leader, Focus on Being Responsible, was conducted to assess the degree to which organizations are addressing RAI. It is based on a global survey of 1,093 executives from organizations grossing over $100 million annually, from 22 industries and 96 countries, as well as insights gathered from an international panel of more than 25 AI experts. A popular term in media and business, RAI is defined by MIT SMR and BCG as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”

Nearly a quarter of survey respondents report that their organization has experienced an AI failure, ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk. RAI initiatives seek to mitigate the technology’s risks by proactively addressing its impact on people. Despite the clear necessity for RAI, less than one-quarter of organizations have a fully implemented program.

“Our research reveals a gap between aspirations and reality when it comes to responsible AI, but that gap also presents an opportunity for organizations to become leaders on this issue,” said Elizabeth M. Renieris, a senior research associate at Oxford’s Institute for Ethics in AI, an MIT SMR guest editor, and a coauthor of the report. “By taking a more expansive view of their stakeholders and viewing RAI as an expression of their deeper corporate culture and values, organizations stand better equipped to ensure that their AI systems promote individual and societal welfare.”

“As organizations rush to adopt AI, it can bring with it unintended risks to individuals and communities, highlighting the critical importance of operationalizing responsible practices,” said Steven Mills, global GAMMA chief AI ethics officer at BCG and a coauthor of the report. “True leaders in RAI are, at their core, responsible businesses. For these frontrunners, RAI is less about focusing on a particular technology and instead is a natural extension of their purpose-driven culture and focus on corporate responsibility.”

How Industry Stakeholders in Africa and China Adopt RAI

BCG and MIT SMR conducted dedicated surveys in Africa and China to understand how industry stakeholders in these key geographies approach RAI. Most respondents in Africa (74%) agree that RAI is on their top management agendas, and 69% agree that their organizations are prepared to address emerging AI-related requirements and regulations. In Africa, 55% of respondents report that their organizations’ RAI efforts have been underway for a year or less (with 45% at 6 to 12 months, and 10% at less than six months). In China, 63% of respondents agree that RAI is a top management agenda item, and the same percentage agree that their organizations are prepared to address emerging AI requirements and regulations. China appears to have longer-standing efforts around RAI, with respondents reporting that their organizations have focused on RAI for one to three years (39%) or more than five years (20%).

Responsible AI Initiatives Often Lag Behind Strategic AI Priorities

The corporate adoption of AI has been rapid and wide-ranging across organizations in all industries and sectors. MIT SMR and BCG’s 2019 report on AI and business strategy found that 90% of companies surveyed had made investments in the technology. But the adoption of RAI has been limited, with just over half of the 2022 survey respondents (52%) reporting that their organizations have an RAI program in place. Of those with an RAI program, a majority (79%) report that the program’s implementation is limited in scale and scope. More than half of respondents cited a lack of RAI expertise and talent (54%) and a lack of training or knowledge among staff members (53%) as key challenges that limit their organization’s ability to implement RAI initiatives.

RAI Leaders Walk the Talk

A small cohort of organizations, representing 16% of survey respondents, have taken a more strategic approach to RAI, investing the time and resources needed to create comprehensive RAI programs. These RAI Leaders have distinct characteristics compared with the remaining 84% of the survey population (who are characterized as Non-Leaders). Three-quarters (74%) of Leaders report that RAI is a part of the organization’s top management agenda, as opposed to just 46% of Non-Leaders. This prioritization is reflected in the commitment of 77% of Leaders to invest material resources in their RAI efforts, as opposed to just 39% of Non-Leaders. Leaders are far more likely than Non-Leaders to disagree that RAI is a “check the box” exercise (61% versus 44%, respectively). The survey results show that organizations with a box-checking approach to RAI are more likely to experience AI failures than Leader organizations.
